Fuzzing JS Engines with Fuzzilli

Part 1: WebKit (JSC)

Preparing WebKit

First, run

git clone https://github.com/Webkit/Webkit.git Webkit

to download the source code.

The download totals roughly 13 GB, so Git's http.postBuffer needs to be enlarged a bit; I simply set mine to 524288000. The command is

git config --global http.postBuffer 524288000

After the clone finishes there is still a long "updating" phase, said to take 2-3 days; once it has completed, later clones won't take nearly as long. I just asked a labmate who had already downloaded it to share a copy.

Next, patch WebKit using Fuzzilli's files. Fuzzilli ships an official patch for JSC, but it only applies cleanly to a specific revision; check the commit hash in fuzzilli/Targets/JavaScriptCore/REVISION (my version used 4110e1b44a345737cdb807d36572c8714e90c5d0).

Following those instructions, run the following in the WebKit root directory:

git checkout 4110e1b44a345737cdb807d36572c8714e90c5d0 -b fuzz

to check out that revision; after ten minutes or so it reports that the fuzz branch has been created.

Then, still in the WebKit root directory, copy webkit.patch over from fuzzilli/Targets/JavaScriptCore/Patches. Note the trailing dot at the end of the cp command; at first I thought it was just a smudge on my screen and overlooked it.

Once it is copied over, apply the patch:

$ cp  path/to/fuzzilli/Targets/JavaScriptCore/Patches/webkit.patch .
$ patch -p1 < webkit.patch

On success it will show:

[Screenshot: patch applied successfully]

Then also copy fuzzbuild.sh from the path/to/fuzzilli/Targets/JavaScriptCore directory and run ./fuzzbuild.sh to start building JSC.

During the build you may hit an error like the one below, complaining that the compiler cannot be found. That's because fuzzbuild.sh uses clang-12 by default, while my machine only had clang-11; changing clang-12 to clang-11 inside fuzzbuild.sh fixes it.

[Screenshot: compiler-not-found error]
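
If you'd rather patch the script non-interactively, something like this works (a minimal sketch; the exact compiler names referenced inside fuzzbuild.sh may differ in your checkout):

# Swap the clang-12/clang++-12 references (assumed) for the version actually installed.
sed -i 's/clang-12/clang-11/g; s/clang++-12/clang++-11/g' fuzzbuild.sh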

After the change, run ./fuzzbuild.sh again. A few warnings show up during the build; they can be ignored. The whole build takes about ten minutes, and this step is done once it completes.

Preparing Fuzzilli

Next, set up Fuzzilli.

Again, clone it straight from GitHub:

git clone https://github.com/googleprojectzero/fuzzilli.git

Since Fuzzilli is compiled with Swift, the next step is to get the Swift toolchain.

First download the Swift release matching your Ubuntu version from the official Swift site.

[Screenshot: Swift download page]

For example, I'm on Ubuntu 20.04, so I ran

wget https://download.swift.org/swift-5.6.1-release/ubuntu2004/swift-5.6.1-RELEASE/swift-5.6.1-RELEASE-ubuntu20.04.tar.gz

to download Swift 5.6.1; check the Swift website for the version that fits your system.

Once the download finishes, extract it with tar -zxvf <filename>.

If you hit this problem while extracting:

[Screenshot: tar extraction error]

it is probably because you downloaded a web page instead of the archive; cat the file, and if it shows HTML code, download it again.
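
A quick way to check (a sketch; substitute whatever filename you actually downloaded):

# If this reports an HTML document (or the head shows HTML tags),
# the download failed and you got an error page instead of the tarball.
file swift-5.6.1-RELEASE-ubuntu20.04.tar.gz
head -c 200 swift-5.6.1-RELEASE-ubuntu20.04.tar.gz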

A normal extraction takes a few minutes.

[Screenshot: extraction output]

After extraction, go into swift-5.6.1-RELEASE-ubuntu20.04/usr/bin; the executable named swift there is the one used to build Fuzzilli.

Remember the path to that swift binary, then change into the Fuzzilli root directory.

Build Fuzzilli with path/to/swift build, replacing path/to/swift with your own Swift path.
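
For example (a sketch assuming Swift was extracted into the home directory and Fuzzilli was cloned next to it; adjust both paths to your setup):

# Put the extracted Swift toolchain on PATH, then build Fuzzilli from its root directory.
export PATH=$HOME/swift-5.6.1-RELEASE-ubuntu20.04/usr/bin:$PATH
cd $HOME/fuzzilli
swift build    # "swift build -c release" gives an optimized binary for longer fuzzing runs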

When the build starts you may get a few errors saying some repository is unsafe/untrusted; the message tells you exactly which command to run to trust it, so just copy and execute it.

[Screenshot: untrusted-repository error with the suggested command]
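
If the error is git's "unsafe repository" (dubious ownership) check, which is an assumption on my part, the suggested fix typically looks like the following; in any case, trust whatever command your own error message prints:

# Hypothetical path; the error message prints the exact directory to mark as safe.
git config --global --add safe.directory /path/to/fuzzilli/.build/checkouts/some-dependency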

After running it, rebuild; it finishes in roughly ten to fifteen minutes.

[Screenshot: Fuzzilli build completed]

The FuzzilliCli binary under fuzzilli/.build/debug is the Fuzzilli executable we will be using.

Starting the fuzzing

First, take a look at Fuzzilli's options with

path/to/swift run FuzzilliCli --help

There are quite a few options; a few of them are annotated below:

Options:
# Name of the target engine profile
--profile=name : Select one of several preconfigured profiles.
Available profiles: ["spidermonkey", "duktape", "jerryscript", "qjs", "v8", "xs", "jsc"].
# Number of fuzzing threads to create
--jobs=n : Total number of fuzzing jobs. This will start one master thread and n-1 worker threads. Experimental!
--engine=name : The fuzzing engine to use. Available engines: "mutation" (default), "hybrid", "multi".
Only the mutation engine should be regarded stable at this point.
--corpus=name : The corpus scheduler to use. Available schedulers: "basic" (default), "markov"
--minDeterminismExecs=n : The minimum number of times a new sample will be executed when checking determinism (default: 3)
--maxDeterminismExecs=n : The maximum number of times a new sample will be executed when checking determinism (default: 50)
--noDeterministicCorpus : Don't ensure that samples added to the corpus behave deterministically.
--maxResetCount=n : The number of times a non-deterministic edge is reset before it is ignored in subsequent executions.
Only used as part of --deterministicCorpus.
--logLevel=level : The log level to use. Valid values: "verbose", "info", "warning", "error", "fatal"
(default: "info").
--numIterations=n : Run for the specified number of iterations (default: unlimited).
--timeout=n : Timeout in ms after which to interrupt execution of programs (default: 250).
--minMutationsPerSample=n : Discard samples from the corpus after they have been mutated at least this
many times (default: 16).
# Only discard a sample after it has been mutated at least this many times (default: 16)
--minCorpusSize=n : Keep at least this many samples in the corpus regardless of the number of times
they have been mutated (default: 1024).
--maxCorpusSize=n : Only allow the corpus to grow to this many samples. Otherwise the oldest samples
will be discarded (default: unlimited).
--markovDropoutRate=n : Rate at which low edge samples are not selected, in the Markov Corpus Scheduler,
per round of sample selection. Used to ensure diversity between fuzzer instances
(default: 0.10)
--consecutiveMutations=n : Perform this many consecutive mutations on each sample (default: 5).
--minimizationLimit=n : When minimizing corpus samples, keep at least this many instructions in the
program. See Minimizer.swift for an overview of this feature (default: 0).
--storagePath=path : Path at which to store output files (crashes, corpus, etc.) to.
# Where to store files produced at runtime (crashes, corpus, etc.)
--resume : If storage path exists, import the programs from the corpus/ subdirectory
--overwrite : If storage path exists, delete all data in it and start a fresh fuzzing session
--exportStatistics : If enabled, fuzzing statistics will be collected and saved to disk every 10 minutes.
Requires --storagePath.
--importCorpusAll=path : Imports a corpus of protobufs to start the initial fuzzing corpus.
All provided programs are included, even if they do not increase coverage.
This is useful for searching for variants of existing bugs.
Can be used alongside importCorpusNewCov, and will run first
# Import an existing corpus before starting the fuzz run
--importCorpusNewCov=path : Imports a corpus of protobufs to start the initial fuzzing corpus.
This only includes programs that increase coverage.
This is useful for jump starting coverage for a wide range of JavaScript samples.
Can be used alongside importCorpusAll, and will run second.
Since all imported samples are asynchronously minimized, the corpus will show a smaller
than expected size until minimization completes.
--importCorpusMerge=path : Imports a corpus of protobufs to start the initial fuzzing corpus.
This only keeps programs that increase coverage but does not attempt to minimize
the samples. This is mostly useful to merge existing corpora from previous fuzzing
sessions that will have redundant samples but which will already be minimized.
--networkMaster=host:port : Run as master and accept connections from workers over the network. Note: it is
*highly* recommended to run network fuzzers in an isolated network!
--networkWorker=host:port : Run as worker and connect to the specified master instance.
--dontFuzz : If used, this instance will not perform fuzzing. Can be useful for master instances.
--noAbstractInterpretation : Disable abstract interpretation of FuzzIL programs during fuzzing. See
Configuration.swift for more details.
--collectRuntimeTypes : Collect runtime type information for programs that are added to the corpus.
--diagnostics : Enable saving of programs that failed or timed-out during execution. Also tracks
executions on the current REPRL instance.
--inspect=opt1,opt2,... : Enable inspection options. The following options are available:
history: Additional .fuzzil.history files are written to disk for every program.
These describe in detail how the program was generated through mutations,
code generation, and minimization
types: Programs written to disk also contain variable type information as
determined by Fuzzilli as comments
all: All of the above

Then just run it with the command from the official README.

Note that if your path lives under ~/, you have to expand the ~ yourself, i.e. write it in the /home/... form, otherwise it won't be recognized.
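
For JSC this boils down to something like the following (a sketch with example paths; the exact location of the instrumented jsc binary depends on where fuzzbuild.sh placed the build output, and --storagePath is optional):

# All paths are examples; substitute your own absolute paths.
cd /home/user/fuzzilli
/home/user/swift-5.6.1-RELEASE-ubuntu20.04/usr/bin/swift run FuzzilliCli \
    --profile=jsc \
    --storagePath=/home/user/fuzzilli-storage \
    /home/user/Webkit/FuzzBuild/Debug/bin/jsc    # verify this path against your own build output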

[Screenshot: Fuzzilli running]

Part 2: Gecko-dev (SpiderMonkey)

Preparing Gecko-dev

Same routine: clone it from the official GitHub. This takes far less time than WebKit, a few hours in total; if you don't want to sit staring at the terminal, just leave it running in the background.
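
For reference, the clone command is the usual one (assuming the mozilla/gecko-dev mirror on GitHub):

git clone https://github.com/mozilla/gecko-dev.git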

As described in the README under Fuzzilli's Targets/SpiderMonkey, first check out the required revision (I used 73330bf7355c0aef844c41d0d7eed2848e53c82f). After a couple of minutes of updating, go from the repo root into js/src, copy fuzzbuild.sh over from fuzzilli/Targets/SpiderMonkey, and run it; see the sketch below.
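
Put together, the steps look roughly like this (a sketch; path/to/fuzzilli stands for wherever Fuzzilli is cloned):

cd gecko-dev
git checkout 73330bf7355c0aef844c41d0d7eed2848e53c82f
cd js/src
cp path/to/fuzzilli/Targets/SpiderMonkey/fuzzbuild.sh .
./fuzzbuild.sh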

When you run it, you may get an error roughly saying that the rustc and cargo compilers cannot be found; if you hit this problem, just install them by following this blog post.
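
If that link isn't handy: the standard way to get both is rustup (my assumption; any install that puts rustc and cargo on PATH should work):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"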

Once they are installed, run fuzzbuild.sh again; the build completes in about five minutes.

[Screenshot: SpiderMonkey build finished]

After the build, according to Fuzzilli's instructions the executable should sit under fuzzbuild_OPT.OBJ/dist/bin/. That directory is not in the gecko-dev root but under the js/src directory we just entered; follow the path down and you will find it.

[Screenshot: js binary under js/src/fuzzbuild_OPT.OBJ/dist/bin/]

Starting the fuzzing

The JSC run from the last couple of days hasn't finished yet, so I'll write up this part in a few days.
