Introduction
What?
Brain is like a draft notebook: a place to store all my notes.
Why?
While browsing dev.to, I found this article:
Then I decided to give the concept a shot.
Licence
You can do whatever the fuck you want with this.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright 2017 papey
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.
Dev
Golang
http package stuff
HTTP Status Code
Do not use numbers directly.
Don't do this
w.WriteHeader(404)
Do this
w.WriteHeader(http.StatusNotFound)
Error message
http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
Flusher
A Flusher can be used to allow an HTTP handler to flush buffered data to the client.
package main
import (
"fmt"
"io"
"net/http"
"time"
)
func main() {
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
ended := make(chan bool, 1) // buffered: the final send must not block if the goroutine never started or already returned
f, flushable := w.(http.Flusher)
if flushable {
go func() {
// emit a "#" every second until the handler finishes or the client disconnects
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
fmt.Println("debug")
io.WriteString(w, "#")
f.Flush()
case <-ended:
return
case <-r.Context().Done():
return
}
}
}()
}
time.Sleep(5 * time.Second)
ended <- true
w.Write([]byte("ok"))
})
http.ListenAndServe(":8080", nil)
}
Working with packages
In Go, every project is, at some level, a package.
A package is simply a directory containing Go source files.
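For illustration, a minimal package source file (greet is a made-up package name) just declares its package and its exported identifiers:

// greet/greet.go
package greet

// Hello is exported because its name starts with an uppercase letter
func Hello(name string) string {
	return "Hello, " + name + "!"
}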
Organising projects
In your project, you may need multiple, isolated packages. Here is the most common project layout I found and use in all my projects:
layout -- top project directory
├── build -- all things related to ci
│ └── ci
│ └── ci.yml
├── cmd -- main package, entrypoint
│ └── layout.go
├── Dockerfile -- container build instructions
├── init -- systemd service file, if useful
│ └── layout.service
├── internal -- private packages, i.e. not to be used in another project
│ └── private
│ └── private.go
├── LICENSE -- license file
├── pkg -- public packages, i.e. packages like libs, to be imported by another project
│ └── public
│ └── public.go
└── README.md -- readme
Using go mod
Init project
go mod init github.com/username/project
Add an external dep to your project
Import the required package
import "github.com/example/lib"
The next time you run a go command, it will fetch the package, e.g.:
go build
In repository packages
Of course, you can also use isolated packages written inside your repository (useful if you need to split things up).
Let's say, for example, I want to add a new public package foo in my project:
mkdir -p pkg/foo && echo "package foo" > pkg/foo/foo.go
After adding stuff in your foo.go file, import it using
import "github.com/username/project/pkg/foo"
Testing tips, golang edition
Files
Test files should be placed in the package directory and named using the following convention:
- repo : github.com/go/pkg
- package : github.com/go/pkg/example
- package file : example/pkg.go
- test file : example/pkg_test.go
Run
go test github.com/go/pkg/package
fmt.Println is not working
Gniagniagnia, use
t.Log()
or
t.Logf()
also,
go test github.com/go/pkg/package -v
How to fail
Mark test as failed (next tests executed)
t.Fail()
Mark test as failed AND exit
t.FailNow()
Print and mark test as failed
t.Error()
or
t.Errorf()
Print, mark test as failed AND exit
t.Fatal()
or
t.Fatalf()
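Putting it together, a minimal sketch (add is a throwaway helper defined just for the example):

package example

import "testing"

// add is a tiny helper, defined here only to have something to test
func add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	if got := add(1, 2); got != 3 {
		// print, mark the test as failed, keep executing
		t.Errorf("add(1, 2) = %d, want 3", got)
	}
	if add(0, 0) != 0 {
		// print, mark the test as failed AND stop this test now
		t.Fatalf("add(0, 0) should be 0")
	}
}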
I don't want my tests to be messy (kudos @athoune)
Shit, I want to disable my OAuth test in CI
if os.Getenv("DRONE") == "true" {
t.Skip("Skipping test in CI environment")
}
Short mode
This test is too long? Skip it!
if testing.Short() {
t.Skip("Skipping test in short mode")
}
go test github.com/go/pkg/package --short
Cleanup test cache
go clean -testcache
Elixir
Lambda or Anonymous Functions
Create an anonymous function and bind it to a variable
Simple one
iex> func = fn -> IO.puts("Hello") end
#Function<21.126501267/0 in :erl_eval.expr/5>
iex> func.()
Hello
:ok
One with arguments
iex> func = fn t -> IO.puts(t) end
#Function<7.126501267/1 in :erl_eval.expr/5>
iex> func.("Hello")
Hello
:ok
Another solution is the & operator, used as syntactic sugar
iex> func = &(&1 + &2)
&:erlang.+/2
iex> func.(2, 2)
4
GenServer
How to create a scheduled job (kudos href)
Code
defmodule Jobs do
# Jobs module is based on GenServer
use GenServer
# Init with `init` as initial value, then continue
def init(init) do
# call to handle_continue
{:ok, init, {:continue, :work}}
end
# Exec job on continue, then reschedule
def handle_continue(:work, state) do
{:noreply, work_then_reschedule(state)}
end
# Handle info and pass it to continue
def handle_info(:work, state) do
{:noreply, state, {:continue, :work}}
end
# Get timer from config.exs
def get_timer_config() do
{:ok, timer} = Application.fetch_env(:app, :timer)
timer
end
# Do the important stuff
defp work_then_reschedule(state) do
# Modify state
state = state + 1
IO.puts(state)
IO.puts("Work, then reschedule !")
# Reschedule, later
Process.send_after(self(), :work, get_timer_config() * 1000)
# Return updated state
state
end
end
Usage
iex> {:ok, pid} = GenServer.start_link(Jobs, 1)
{:ok, #PID<0.251.0>}
Dates
When working with dates in Elixir, never use "<" or ">": these operators compare structs field by field (structural term comparison, not chronological), so sometimes they will not work the way you want
iex(16)> first = Timex.parse!("Tue, 29 Oct 2019 16:00:00 +0000", "%a, %d %b %Y %H:%M:%S %z", :strftime)
#DateTime<2019-10-29 16:00:00+00:00 GMT Etc/GMT+0>
iex(17)> second = Timex.parse!("Wed, 06 Nov 2019 23:00:00 +0000", "%a, %d %b %Y %H:%M:%S %z", :strftime)
#DateTime<2019-11-06 23:00:00+00:00 GMT Etc/GMT+0>
iex(18)> first < second
false
Timex
For example, Timex comes with a compare function.
Timex.compare(first, second)
-1
The return value should be read as follows:
- -1 : first is before second
- 0 : first is equal to second
- 1 : second is before first
Rust
serde
Serde is a Rust crate used to serialize and deserialize stuff.
For example, this can be used to deserialize toml into a dedicated struct.
Example, with toml
Here is how serde, from version 1.0 (included) onwards, should be used with the serialize/deserialize mechanisms.
Be careful, there are breaking changes before version 1.0 and you can find confusing docs.
main.rs
file
use serde::Deserialize;

#[derive(Deserialize)]
struct Config {
    name: String,
    url: String,
}

fn main() {
    let content = r#"name = "example"
url = "https://example.com""#;
    let conf: Config = toml::from_str(content).unwrap();
    println!("name: {}, url: {}", conf.name, conf.url)
}
Cargo.toml
file
[package]
name = "safiste"
version = "0.1.0"
authors = ["El Famoso Safiste"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
serde = { version = "1.0.94", features = ["derive"] }
toml = "0.5.1"
Output
name: example, url: https://example.com
Testing
Cargo test and println!
To see stdout when running cargo test (kudos CobaltVelvet)
cargo test -- --nocapture
Run all the tests, even if one of them fails
cargo test --no-fail-fast
Ruby
Playing with arrays
Check if all array elements are equal
arr = [0, 0, 0, 0, 0, 0, 0]
puts true if arr.uniq.length == 1
Array includes a sub array
(subarray & array) == subarray
Count value occurrences
[1, 1, 2, 3, 4, 5, 5, 5, 5].tally
=> {1=>2, 2=>1, 3=>1, 4=>1, 5=>4}
Reduce vs Each With Object
Both methods serve the same purpose: build a single value out of an Enumerable
Reduce
Reduce is preferred when you need to produce a simple value, because it reduces over the returned value (the block's return value becomes the accumulator for the next iteration)
[1, 2, 3].reduce(:+)
> 6
We can illustrate this simply with this snippet :
[1, 2, 3].reduce do |acc, v|
acc += v
0
end
> 0
Each With Object
Each with object is preferred when you reduce into a hash or some other complex object, because it uses the accumulator value and not the returned one
[1, 2, 3].each_with_object({ sum: 0 }) do |v, acc|
acc[:sum] += v
0
end
> {sum: 6}
Fun fact, reduce takes |acc, v| while each_with_object takes |v, acc|
Interfaces in TypeScript
Overview
Interfaces in TypeScript are used to create a blueprint or an abstraction of a class.
An interface can contain properties (mandatory, optional or readonly)
interface Example {
mandatory: string;
optional?: string;
readonly ro: string;
}
And methods
interface Example {
mandatory: string;
optional?: string;
concat(sep: string): string;
}
To use the interface, a class should implement it:
class Impl implements Example {
mandatory: string;
optional?: string;
constructor(m: string, o: string) {
this.mandatory = m;
this.optional = o;
}
concat(sep: string): string {
return `${this.mandatory}${sep}${this.optional}`;
}
}
The Promise case
What about a promise-based method? Do not forget what async/await does: it wraps everything in Promise<T>.
In order to add a promise to your interface, just specify a Promise as the return type.
interface Example {
mandatory: string;
optional?: string;
promize(arg: string): Promise<string>;
}
Decorators in TypeScript
Overview
Decorators are an experimental feature of TypeScript (and a stage 2 TC39 proposal for JavaScript, so they may eventually land in standard JS) allowing you to inject specific behaviors into classes, properties, methods, accessors or parameters.
This feature allows some kind of meta-programming and dependency injection, evaluated at runtime.
This is mainly used in libs to add specific behaviors to your own code.
For example, TypeORM, an ORM lib, uses this feature to give users a nice way to annotate their models; the dedicated character to apply a decorator is @:
@Entity()
export class Person {
@PrimaryGeneratedColumn()
id: number;
@Column()
lastname: string;
}
Example
This feature needs to be explicitly enabled in your tsconfig.json:
"experimentalDecorators": true
Decorators are just functions. For example, here is a property decorator:
class Decorator {
// Call to the emoji decorator with a string passed as argument
@emoji("🦍")
name: string = "";
constructor(name: string) {
this.name = name;
}
}
// The actual decorator code, this is, in fact a decorator factory
// It's a higher-order function returning the actual decorator
// It's a common and nice way to have access to a larger scope to
// play with the args passed as params (emojo, here)
function emoji(emojo: string) {
// Return the actual decorator
return function (target: any, key: string) {
// get the actual value
let val = target[key];
// customize getter
const getter = () => {
return val;
}
// and setter, to add some nice emojos
const setter = (next: string) => {
val = `${emojo} ${next} ${emojo}`;
}
// Apply those changes to the actual object property
Object.defineProperty(target, key, {
get: getter,
set: setter,
enumerable: true,
configurable: true
});
}
}
const example = new Decorator("hello");
console.log(example.name);
Even if this example is quite useless, it gives an overview of the possibilities of this feature.
More resources can be found in the Handbook
js
Presentation of the main ES6 features
Spread Operator
It allows an iterable to be expanded into its individual items.
array = [1, 2];
appended = [3,4];
all = [...array, ...appended];
console.log(all)
This can be useful in some situations like this one:
array = [1, 2, 3, 4];
console.log(Math.min(...array));
instead of the old syntax
array = [1, 2, 3, 4];
console.log(Math.min.apply(null, array));
Rest Operator
Represents an indefinite number of args as an array
const sum = (...numbers) => {
return numbers.reduce((previous, current) => {
return previous + current;
});
};
Arrow Function
Functions can now be written like this
const f = (a, b) => a + b;
console.log(f(1,2));
instead of
function f(a, b) {
return a + b;
}
console.log(f(1,2));
Default parameters
With arrow syntax !
const f = (a = 10) => console.log(a);
f();
f(1);
Destructuring (arrays and objects)
With arrays
const array = [1, 2];
const [a, b] = array;
console.log(a, b)
or objects
const person = { name: "Jesus", age: "33" };
const { name, age } = person;
console.log(name, age);
Web
front
Babel and babel-preset-stage-2
If you want to use the spread operator (...) in JavaScript, you need babel-preset-stage-2.
yarn add -D babel-preset-stage-2
Then enable it in your .babelrc:
{
"presets": [
["es2015", { "modules": false }],
["stage-2"]
]
}
JVM
Java
Defaults
Java uses int (32 bits) as the default type for integer literals.
Java uses double as the default type for floating point literals (double-precision floating point).
Regex
Non capturing group
- 8 cats
- 9 dogs
- 10 cows
Match and capture ONLY the number of cats and dogs, using ?: for non-capturing groups
(\d+) (?:cats|dogs)
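A quick check of that pattern, sketched in Go for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`(\d+) (?:cats|dogs)`)
	input := "8 cats\n9 dogs\n10 cows"
	for _, m := range re.FindAllStringSubmatch(input, -1) {
		fmt.Println(m[1]) // prints 8 then 9; "10 cows" does not match
	}
}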
Tools
Git
Fetch a remote branch
- Create a local branch that tracks a remote branch
git checkout --track origin/develop
If you want to change the name of the local branch (NOT A GOOD IDEA)
git checkout --track -b gniagnia origin/develop
Amend without editing commit message
git commit --amend --no-edit
Copy current HEAD sha sum to clipboard
git rev-parse HEAD | xclip
Conditional Git configuration
Since git 2.13, there is an option to include configuration (or not) based on a condition
Here is an example use case
# Specific configurations for work and personal repos
[includeIf "gitdir:~/code/work/"]
path = .gitconfig.work
[includeIf "gitdir:~/code/oss/"]
path = .gitconfig.oss
If included, the settings from the configuration file referenced by the path parameter will be appended to, or will override, the current configuration.
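For illustration, .gitconfig.work could then hold the work-only settings (the email below is a placeholder):

# ~/.gitconfig.work
[user]
	email = me@work.example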
Switch to previous branch
git checkout -
SVN
Commit only selected files
svn commit -m "Message" README src/example/script.pl
Ops
k8s
kubectl
kubectl is the main command line tool for k8s. See reference
Raw commands
Sometimes it's handy to just run a container without all the yaml mess.
Running a container
kubectl run --image=IMAGE name
apply is not always the answer
Even if you see kubectl apply everywhere, sometimes it's handy to use dedicated commands.
For example, kubectl create is used to create a resource and will throw an error if the resource already exists.
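A quick illustration (manifest.yml is a placeholder):

kubectl create -f manifest.yml # throws an error if the resource already exists
kubectl apply -f manifest.yml # creates the resource or patches it, can be re-run safely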
Imperative commands quick reference
Do not forget that --dry-run=client
can be useful
Create a simple pod
kubectl run nginx --image=nginx
Create a deployment
kubectl create deployment --image=nginx nginx
Gotcha: kubectl create deployment does not have a --replicas option, you need to use kubectl scale after the deployment creation
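For example, scaling the nginx deployment created above (3 replicas is an arbitrary number):

kubectl scale deployment nginx --replicas=3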
Generating manifests, the blazing fast way
kubectl run app --image=app --dry-run=client -o yaml
kubectl create deployment --image=redis redis --dry-run=client -o yaml
Of course, if you need to write it into a file
kubectl create deployment --image=redis redis --dry-run=client -o yaml > deploy-manifest.yml
Switching namespaces
kubectl config set-context $(kubectl config current-context) --namespace=target
Cluster Upgrade
Versions
vMajor.Minor.Patch (semantic versioning)
K8S compatibility is as follows: the current version and the two previous minor versions (n-2) are supported.
Always upgrade one minor release at a time, e.g. to move from 1.19 to 1.21, upgrade to 1.20 first, then to 1.21.
kubeadm
Check latest/upgradable version
kubeadm upgrade plan
Master node upgrade
Drain it and mark it as unschedulable
kubectl drain master
kubectl get nodes
to verify that master node is unschedulable
Update kubeadm
apt install kubeadm=X.Y.Z-00
kubeadm version
to ensure version number
Node update
kubeadm upgrade apply vX.Y.Z
kubectl version
to validate
Update kubelet
apt install kubelet=X.Y.Z-00
kubectl get nodes
to validate
Mark master node as schedulable
kubectl uncordon master
Worker nodes upgrade
Repeat the same actions done on master for each worker node
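As a sketch, assuming the same apt-based setup as the master and a worker named node01:

kubectl drain node01 --ignore-daemonsets
apt install kubeadm=X.Y.Z-00
kubeadm upgrade node
apt install kubelet=X.Y.Z-00
systemctl restart kubelet
kubectl uncordon node01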
🎉
macOS
iTerm
Sync sessions
Toggle the feature
⌘ + ⇧ + i
Will toggle sync and input will be sent to all sessions
GNU/Linux
Debian
Upgrade to latest stable
Ensure backups!
Ensure everything is up to date
apt update
apt upgrade
apt dist-upgrade
Check package db consistency
dpkg -C
Check hold packages
Held packages will not be upgraded; ensure there is no deps shit
apt-mark showhold
If there is anything, check with dpkg --audit
Update source list
Check
sed s/$OLDSTABLE/$STABLE/g /etc/apt/sources.list
# for example
sed s/stretch/buster/g /etc/apt/sources.list
If it's ok, go
sed -i s/stretch/buster/g /etc/apt/sources.list
Do not forget stuff in sources.list.d
find /etc/apt/sources.list.d/ -type f -exec sed -i s/stretch/buster/g {} \;
Simulate
apt update
apt list --upgradable
Go
apt update
apt full-upgrade
Clean
apt autoclean
Enjoy!
Shell Tips
Find stuff eating all SWAP (kudos gardouille)
find /proc -maxdepth 2 -type f -iname status -exec grep -HR VmSwap {} \; | grep -v " 0 kB"
Tools
Curl
Add form-file in a POST request
curl -F "field-name=@/my/path" http://example.com
POST is implicit because of -F.
Ansible
List all tags
ansible-playbook -i host.targets -v site.yml --list-tags
Start at a specific task (life saver)
ansible-playbook -i host.targets -v site.yml --start-at-task "The audacious task"
List hosts
ansible-playbook -i host.targets -v site.yml --list-hosts
Limit hosts
ansible-playbook -i host.targets -v site.yml --limit hostname
Jinja2, templates & carriage return
To tell Jinja2 not to mess with carriage returns in templates, add
#jinja2: trim_blocks:False
---
at the top of the template file
Import vs include (kudos @href)
If you want to exec a set of tasks when a condition is true, use import_tasks.
- name: Include init tasks
import_tasks: init.yml
tags:
- init
when: proof.stat.exists == False
include_tasks will add all the tasks to the play run, even if the when condition is false.
Exec task(s) if a specific service is found
- name: Is Docker running ?
service_facts:
- name: Push Telegraf docker input config if needed
template:
src: inputs/input.docker.conf.j2
dest: /etc/telegraf/telegraf.d/input.docker.conf
notify: reload telegraf
when: "'docker' in services"
Docker
Inspect
Get container ip
docker container inspect --format='{{range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' upbeat_babbage
MongoDB
Non-interactive connection from the CLI (kudos lukhas)
The docs say
mongo --username user --password 'passwd'
Reality says
mongo --username user --password='passwd'
OpenSSL
Checking
CSR
openssl req -text -noout -verify -in request.csr
Key
openssl rsa -in privkey.key -check
Cert
openssl x509 -in certificate.crt -text -noout
Tmux
The famous terminal multiplexer (sorry not sorry, but screen is a piece of shit).
Sync panels
Sync panels can be quite useful to run the same tasks on a set of servers.
tmux.conf
file
[...]
unbind S
bind S set-window-option synchronize-panes
[...]
How to use it
Typical use case: updating, let's say, 3 Debian servers.
- start tmux
- split into 3 panes, use ssh to connect to the targets [1]
- use prefix [2] + S (the bind set above) to activate sync
- apt update && apt upgrade
- And voilà!
[1] a dedicated session can be used to automate stuff, more on that later...
[2] by default, the prefix is Ctrl + b
Misc
Archlinux Urgent Chroot
In case of failure, here is the savior:
loadkeys fr
cryptsetup luksOpen /dev/nvme0n1p5 luks
opts=defaults,noatime,nodiratime,ssd,compress=zstd
mount -o $opts,subvol=@ /dev/mapper/vg-arch /mnt
mount -o $opts,subvol=@/home /dev/mapper/vg-arch /mnt/home
mount -o $opts,subvol=@/var /dev/mapper/vg-arch /mnt/var
mount -o $opts,subvol=@/var/log /dev/mapper/vg-arch /mnt/var/log
mount -o $opts,subvol=@/var/cache/pacman/pkg /dev/mapper/vg-arch /mnt/var/cache/pacman/pkg
mount /dev/sdX1 /mnt/boot
arch-chroot /mnt