Lajos Gerecs's blog

Why are some markdown headings not converted to the proper html heading?

2021-02-09

You are in a frustrating situation: some headings marked with # are not being
converted to their rendered HTML counterparts.

Some do get converted, which makes it all the more confusing. You retype the heading,
and now it works, even though you seemingly changed nothing.

If you try a different markdown renderer, it may or may not produce the same output,
which makes this issue hard to diagnose.

This issue, for example, does not exist in Hexo, but it does in Hugo.

Solution

You are typing too fast. You type the #s with Option+3 (or Alt+3), then quickly
hit Space, but mistakenly press Option+Space instead.

On macOS, Option+Space inserts a non-breaking space (U+00A0) instead of a regular space, so the renderer never sees the whitespace it requires after the #s.

You can use https://dillinger.io to validate your markdown file; it will
display the issue clearly:

Alternatively, we can check it with a hex editor:

The solution is to replace the non-breaking space with a normal space, and to slow down. :)
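You can also find and fix the offending bytes from the command line. A small sketch, assuming GNU grep and sed (the file name here is made up):

```shell
# Create a sample file whose first heading has a non-breaking space
# (UTF-8 bytes C2 A0) after the '#' -- post.md is a hypothetical file name.
printf '#\xc2\xa0Broken heading\n# Good heading\n' > post.md

# List the offending lines (-P enables Perl-style regexes in GNU grep)
grep -nP '\xC2\xA0' post.md

# Replace every non-breaking space with a regular space (GNU sed)
sed -i 's/\xc2\xa0/ /g' post.md
```

After the sed run, both headings start with an ordinary `# ` and render correctly.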

  • annoying
  • markdown

RabbitMQ Quorum Queues

2020-03-08

Hi!

Recently I published a blog post about RabbitMQ's new features, in particular Quorum Queues.

You can read it here:

https://www.erlang-solutions.com/blog/rabbitmq-quorum-queues-explained-what-you-need-to-know.html


Restoring a dead MySQL database using the database files with Docker

2017-02-25

Let's say your server died and you want to export the latest database content from your MySQL instance.

Previously you had to install MySQL into a virtual machine and copy the files into its data directory. Thanks to Docker, you can now do this quite quickly.

You will need all the files from the dead machine, usually found in the /var/lib/mysql directory. Although the per-database files live in directories named after each database, you need everything, because the InnoDB engine also relies on files in the top-level directory (such as the ibdata1 system tablespace). If you do not copy all the files, you will get the error below:

mysql> USE database_name;
mysql> SELECT * FROM table_name;
ERROR 1146 (42S02): Table 'database_name.table_name' doesn't exist

Now you have to get the correct version of MySQL running in a Docker container. You can find the version by running ./mysql --version with the binary from the dead machine; for me, a MySQL 5.6 instance worked with data from version 5.5.54.

You can use the docker image from the official repository: MySQL Docker. The command below will start a MySQL 5.6 instance:

export PATH_TO_SAVED_DATA="/home/user/saved_mysql_data_dir"
export MYSQL_VERSION="5.6"
docker run --name some-mysql -v "$PATH_TO_SAVED_DATA":/var/lib/mysql -d mysql:"$MYSQL_VERSION"

Note that even if you pass in a root password, it will not be used, because we overwrite the MySQL data files; the passwords will be the same as they were on the dead machine!

You can see in the command that we named our container 'some-mysql'. This means that after a docker stop you cannot start it again with the same run command; you have to docker rm some-mysql first. Alternatively, you can drop the --name part and look up the container ID with docker ps.

You can do two things now:

docker exec -i -t some-mysql sh -c 'bash'

This will give you a shell inside the container. You can use any tools there to recover the data. One thing you cannot do is stop the mysqld process, because that stops the container as well. If you need to reset your password, you have to get the Dockerfile and edit it to run mysqld_safe instead.

If you just want to dump the data and you know your password then just use this command:

docker exec -i -t some-mysql sh -c 'mysqldump -u mysql_user -p --databases database_name'  > restore.sql

The command above will ask for a password then dump the SQL into the restore.sql file.

  • docker
  • mysql

Why you should automate your side project

2016-06-07

I am a robot

I have a few side projects, mostly websites. They need to be maintained: the server has to be set up, new code has to be deployed. I used to think that automating things I only do once in a blue moon was unnecessary. This blog is proof of the opposite.

I decided to create a new blog around half a year ago but never had time to actually write anything, fix the design, or do anything else with it. What I did do was automate deploying new blog posts, setting up the nginx configs, and setting up the Let's Encrypt certificate. This means that whenever I have time, I can just sit down, do my thing, then deploy it with one command.

Automate server creation and deployment

Ansible server provision playbook thing

This will help when you decide to migrate your side project to another server. I found Ansible the most understandable tool for me; there are already a lot of playbooks available, and you can easily modify them if you don't like something. As a last resort you can always drop back to bash (but try to avoid that).
Ansible can also be used to automate deployment. Usually a side project, like this blog, is just a couple of files, a Java jar, or similar. You can write an Ansible playbook to create the necessary directories, upload the files, reload nginx, all kinds of things. When your project gets popular, you can easily change the setup.
Don't forget to set up your SSH key on the servers, so you don't have to remember your password (and it's more secure).

Automate development

Do you always have to open three or so consoles to start the related services of your side project? For this blog it's only the static site generator, but for other projects it's the database (or several databases), Node.js for generating the frontend JavaScript, the backend server, something like Play Framework, or just Vagrant. You can use tmux to start all services at once and keep a console on each of them.

Using tmux to start multiple consoles at the same time

You can use docker to contain all the necessary stuff for all of your components. This makes it portable, you don’t have to worry about accidentally deleting some crucial information which was needed to bootstrap your app (like a db schema which was in a locally installed database which got deleted on an OS reinstall).

You will not remember

There is nothing in development that you only have to do once, and you will not remember how you did it last time. At the very least, save it in a deploy.sh script.
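Even a three-line script beats re-deriving the steps every time. A minimal sketch of such a deploy.sh, assuming a static site built with Hexo, synced with rsync, and served by nginx; the host, paths, and commands are all hypothetical, and by default it only prints what it would do:

```shell
#!/usr/bin/env bash
# deploy.sh -- one-command deploy (sketch; host, paths, and commands are made up).
# By default DRY_RUN=1, so the script only prints the commands it would run;
# set DRY_RUN= to actually execute them.
set -euo pipefail

DRY_RUN="${DRY_RUN-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

run hexo generate                                                # build the static site
run rsync -az --delete public/ user@example.com:/var/www/blog/   # sync only changed files
run ssh user@example.com 'sudo systemctl reload nginx'           # pick up config changes
```

The dry-run toggle is handy while you are still figuring out the steps: you can iterate on the script without touching the server.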

Learn a lot

You can learn a lot by trying to understand how other kinds of developers and operations people work. I always thought (and still kind of think) that server provisioning and deployment are some kind of arcane art, but in the end it's all just files, processors, and memory, and…

Final words

If you don’t really have time for side projects that’s when you really have to reduce friction. Friction of development, friction of deploying. Everything. When I want to start something I want to start it immediately, not trying to figure out how I ran something half a year ago. I want to see my changes in production. I think you should always strive to do best practices when working on your own, in your free time, but you have to know when it is good enough. It depends on the goal why you are doing it. My sideprojects are mostly learning projects, therefore I strive to implement everything the best possible way I can imagine in the minute, but try not to get too caught up in this. Perfection is the enemy of done.

  • devops
  • en

Solving the Gitcoin Problem - Stripe CTF 3

2016-02-14

Stripe, the payment provider, organised a CTF where you could test your knowledge of distributed systems. One of the tasks was to write a Bitcoin-ledger-like implementation based on git.

If you are not familiar with git's internal implementation, then for this you should know that git creates an object, written to a file, for every commit. This commit object contains the hash of the tree (the contents you are storing) plus the related metadata: the author, email, and commit message.

In Bitcoin mining you always want to generate a hash that is less than some target. For example, if you generate the hash 123 and the target is 001, your hash is not accepted, as it is not below the threshold.

In this task we had to generate a SHA-1 hash below the required threshold, which was 000001. If we accept that SHA-1 hashes are effectively random, this means we had to generate around 16 million hashes within the time limit they set us, which was a few minutes.
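The 16 million figure comes straight from the target: a hex hash below 000001 must start with six zero digits, and a random hash does that with probability 1/16^6, so the expected number of attempts is:

```shell
# Expected attempts for a hash starting with six zero hex digits
echo $((16**6))
# 16777216
```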

We had to write a Gitcoin miner; an example miner was supplied, but it was really slow.

First I thought it would be good to use Scala/Akka to pass the tasks around. My plan was to fill up an Actor system and let it handle the hashing, using the sys.process package and some commands from the supplied bash example. For some reason it was a complete failure: the Actor mailbox filled up, used all my memory, and never got anything done. In the end I had to shoot the idea down (and the process too :) ).

After this I tried a simple parallel for loop, calling

git hash-object -t commit --stdin -w <<< commit-body

This never used more than a couple of percent of CPU; I don't know why, maybe git locks the files in the repo?

After this I got the idea to rewrite the git hash-object part in Scala, and in the end this was the solution that worked out.

If we visit the git objects documentation we can see that a commit contains five things:

tree d8329fc1cc938780ffdd9f94e0d364e0ea74f579
parent some-hash
author Scott Chacon <schacon@gmail.com> 1243040974 -0700
committer Scott Chacon <schacon@gmail.com> 1243040974 -0700
first commit (some description)

The first value is the tree, which we can obtain with git write-tree. The second is the parent of the commit, which we get with git rev-parse HEAD. The rest were just placeholders; the goal was to vary the last part of the commit until git hash-object produced a hash lower than 000001xxxxxxxxxxxxxxxx…, meaning the hash, compared as a string, must sort below 000001. This is difficult because there is no way to work backwards from the desired hash to the input, so the only option is to try as many hashes as we can in the shortest possible time.

Now that we know what the git hash-object function needs, we have to know the format used to calculate the hash. It turns out git prepends a commit (body-length)\0 header to the commit body, so that's all we have to replicate and we are good to go.
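This header scheme is easy to check on a small example. git's object id for the blob containing "hello\n" is well known, and plain sha1sum reproduces it once the (type length)\0 header is prepended; commit objects work the same way with a commit header:

```shell
# git stores "hello\n" as the object "blob 6\0hello\n";
# hashing those exact bytes with sha1sum reproduces git's object id.
printf 'blob 6\0hello\n' | sha1sum
# ce013625030ba8dba906f756967f9e9ca394464a  -
```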

My code looked something like this:

import scala.sys.process._
import java.security.MessageDigest

val difficulty = "000001"
val now = System.currentTimeMillis / 1000

// SHA-1 of the raw bytes as a zero-padded, 40-character hex string
def digest(s: String): String =
  MessageDigest.getInstance("SHA-1")
    .digest(s.getBytes("UTF-8"))
    .map("%02x".format(_))
    .mkString

// cross-check: ask git itself to hash the same commit body
def commitHash(body: String): String =
  (Seq("printf", "%s", body) #| Seq("git", "hash-object", "-t", "commit", "--stdin")).!!.trim

val tree = Seq("git", "write-tree").!!.trim
val parent = Seq("git", "rev-parse", "HEAD").!!.trim
val body = s"tree ${tree}\n" +
  s"parent ${parent}\n" +
  s"author CTF user <me@example.com> ${now} +0000\n" +
  """committer CTF user <me@example.com> 1390941203 +0000
Give me a Gitcoin
"""

val max = 100000000
for (i <- (0 to max).par) {
  val bodyCurrent = body + i + "\n"
  // git hashes "commit <length>\0<body>", so prepend that header ourselves
  val fullBody = "commit " + bodyCurrent.length + "\u0000" + bodyCurrent
  val hash = digest(fullBody)

  if (hash < difficulty) {
    println("-----------------------")
    println(fullBody)
    println("-----------------------")
    val cmd = "git hash-object -t commit --stdin -w <<< \"" + bodyCurrent.trim() + "\";git reset --hard \"" + hash + "\" < /dev/null; git push origin master;"
    println(cmd)
    println("-----------------------")
    println("HASH: " + hash)
    println("GITHASH: " + commitHash(bodyCurrent))
    //System.exit(0)
  }
}

I imported the scala.sys.process._ package to easily run shell commands; you can see the syntax in the first couple of rows. After assembling the body of the commit, I started a parallel for loop to use all of my CPU cores (a 2nd-generation mobile Core i5).

Timing my miner, hashing 100000000 commits:

../scala-2.10.3/bin/scala ../Scalaminer.scala
1344,73s user 20,31s system 355% cpu 6:23,53 total

That works out to approximately 261 kHash/sec. I don't know if that counts as fast, but it was enough to get a winning commit.
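The rate is just the hash count divided by the wall-clock time above (6:23.53 ≈ 383 seconds):

```shell
# 100 million hashes over roughly 383 seconds of wall-clock time
echo $((100000000 / 383))
# 261096
```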

Here is the gist of the code with the remnants of previous tries: https://gist.github.com/luos/f2c8098be3ef0e49cee9

In the end I committed by hand, because there was some trouble with the newline character: sometimes it was needed at the end of the commit, sometimes not. Strange.

  • bitcoin
  • en
  • scala

This is a test post, please ignore

2016-01-10

This is a test post

