
osquery Source Code Analysis: shell_history

Introduction

The previous two articles covered how to use osquery; this one analyzes its source code, mainly through the shell_history and process_open_sockets tables. Walking through their implementations shows how osquery turns SQL queries into system information, and it also deepens our understanding of Linux.

Table Overview

shell_history is used to view shell history, while process_open_sockets records the host's current network activity. Example usage:

shell_history

osquery> select * from shell_history limit 3;
+------+------+-------------------------------------------------------------------+-----------------------------+
| uid  | time | command                                                           | history_file                |
+------+------+-------------------------------------------------------------------+-----------------------------+
| 1000 | 0    | pwd                                                               | /home/username/.bash_history |
| 1000 | 0    | ps -ef                                                            | /home/username/.bash_history |
| 1000 | 0    | ps -ef | grep java                                                | /home/username/.bash_history |
+------+------+-------------------------------------------------------------------+-----------------------------+

process_open_sockets here shows the connection made by a reverse shell:

osquery> select * from process_open_sockets order by pid desc limit 1;
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+
| pid    | fd | socket   | family | protocol | local_address | remote_address | local_port | remote_port | path | state      | net_namespace |
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+
| 115567 | 3  | 16467630 | 2      | 6        | 192.168.2.142 | 192.168.2.143  | 46368      | 8888        |      | ESTABLISH  | 0             |
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+

osquery's code layout is very clean: all table definitions live under specs, and all table implementations live under osquery/tables.

Taking shell_history as an example, its table definition is in specs/posix/shell_history.table:

table_name("shell_history")
description("A line-delimited (command) table of per-user .*_history data.")
schema([
    Column("uid", BIGINT, "Shell history owner", additional=True),
    Column("time", INTEGER, "Entry timestamp. It could be absent, default value is 0."),
    Column("command", TEXT, "Unparsed date/line/command history line"),
    Column("history_file", TEXT, "Path to the .*_history for this user"),
    ForeignKey(column="uid", table="users"),
])
attributes(user_data=True, no_pkey=True)
implementation("shell_history@genShellHistory")
examples([
    "select * from users join shell_history using (uid)",
])
fuzz_paths([
    "/home",
    "/Users",
])

shell_history.table defines the table's metadata. The entry point is the genShellHistory() function in shell_history.cpp, and the spec even gives an example query: select * from users join shell_history using (uid). shell_history.cpp itself is located at osquery/tables/system/posix/shell_history.cpp.

Likewise, the process_open_sockets definition is in specs/process_open_sockets.table, and its implementations are in osquery/tables/networking/[linux|freebsd|windows]/process_open_sockets.cpp. Because process_open_sockets exists on several platforms, there is a process_open_sockets.cpp for each of linux, freebsd, and windows. This article uses the linux one.

The shell_history Implementation

Background

Before diving into the code, a few Linux basics. There are many different Unix shells: bash, zsh, tcsh, sh, and so on. bash is the most common and is the built-in shell of almost every Unix-like system, while zsh adds more features on top of bash. Whenever we type commands in a terminal, one of these shells is doing the work.

Running ls -al in a user's home directory reveals a .bash_history file, which records every command entered in the terminal. Similarly, if we use zsh, a .zsh_history file records our commands.

A user's home directory may also contain a .bash_sessions directory. As one write-up explains:

A new folder (~/.bash_sessions/) is used to store HISTFILE’s and .session files that are unique to sessions. If $BASH_SESSION or $TERM_SESSION_ID is set upon launching the shell (i.e. if Terminal is resuming from a saved state), the associated HISTFILE is merged into the current one, and the .session file is ran. Session saving is facilitated by means of an EXIT trap being set for a function bash_update_session_state.

In other words, .bash_sessions stores per-session HISTFILEs and .session files. If $BASH_SESSION or $TERM_SESSION_ID is set when the shell launches, the associated session state is restored. This also means the .bash_sessions directory can contain *.history files recording the command history of individual sessions.

Analysis

QueryData genShellHistory(QueryContext& context) {
    QueryData results;
    // Iterate over each user
    QueryData users = usersFromContext(context);
    for (const auto& row : users) {
        auto uid = row.find("uid");
        auto gid = row.find("gid");
        auto dir = row.find("directory");
        if (uid != row.end() && gid != row.end() && dir != row.end()) {
            genShellHistoryForUser(uid->second, gid->second, dir->second, results);
            genShellHistoryFromBashSessions(uid->second, dir->second, results);
        }
    }

    return results;
}

Looking at the entry function genShellHistory() of shell_history.cpp, shown above:

It iterates over all users, extracting each user's uid, gid, and directory, then calls genShellHistoryForUser() to collect that user's shell history, plus genShellHistoryFromBashSessions(), which works much the same way.

genShellHistoryForUser():

void genShellHistoryForUser(const std::string& uid, const std::string& gid, const std::string& directory, QueryData& results) {
    auto dropper = DropPrivileges::get();
    if (!dropper->dropTo(uid, gid)) {
        VLOG(1) << "Cannot drop privileges to UID " << uid;
        return;
    }

    for (const auto& hfile : kShellHistoryFiles) {
        boost::filesystem::path history_file = directory;
        history_file /= hfile;
        genShellHistoryFromFile(uid, history_file, results);
    }
}

Note that before doing anything else it calls:

auto dropper = DropPrivileges::get();
if (!dropper->dropTo(uid, gid)) {
    VLOG(1) << "Cannot drop privileges to UID " << uid;
    return;
}

which drops privileges to the given uid and gid. Why is that necessary? When I asked about it, a foreign user gave a very thorough answer:

Think about a scenario where you are a malicious user and you spotted a vulnerability(buffer overflow) which none of us has. In the code (osquery which is running usually with root permission) you also know that history files(controlled by you) are being read by code(osquery). Now you stored a shell code (a code which is capable of destroying anything in the system)such a way that it would overwrite the saved rip. So once the function returns program control is with the injected code(shell code) with root privilege. With dropping privilege you reduce the chance of putting entire system into danger.

There are other mitigation techniques (e.g. stack guard) to avoid above scenario but multiple defenses are required

In short, osquery usually runs as root. If an attacker planted malicious shellcode in .bash_history, then once osquery read that file the attacker could end up with root privileges. Dropping privileges before reading the file neatly avoids that problem.
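The same drop-then-restore pattern can be sketched in Python (a rough illustration, not osquery's code; read_as_user is a made-up helper, and the drop branch only runs when the process is root):

```python
import os
import tempfile

def read_as_user(path, uid, gid):
    """Read a file with effective privileges dropped to (uid, gid).

    Mirrors osquery's DropPrivileges pattern: drop the effective uid/gid
    before touching an attacker-controlled file, restore them afterwards.
    """
    saved_euid, saved_egid = os.geteuid(), os.getegid()
    try:
        if saved_euid == 0:  # only root can actually drop privileges
            os.setegid(gid)  # drop gid first, while still root
            os.seteuid(uid)
        with open(path) as f:
            return f.read()
    finally:
        if saved_euid == 0:  # mirror of ~DropPrivileges(): restore
            os.seteuid(saved_euid)
            os.setegid(saved_egid)

with tempfile.NamedTemporaryFile("w", suffix=".bash_history", delete=False) as f:
    f.write("ps -ef\n")
print(read_as_user(f.name, os.geteuid(), os.getegid()))
```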

/**
* @brief The privilege/permissions dropper deconstructor will restore
* effective permissions.
*
* There should only be a single drop of privilege/permission active.
*/
virtual ~DropPrivileges();

As the comment indicates, when the DropPrivileges object is destructed, the previous effective permissions are restored.

It then loops over the kShellHistoryFiles list and calls genShellHistoryFromFile() for each entry. kShellHistoryFiles was defined earlier as:

const std::vector<std::string> kShellHistoryFiles = {
    ".bash_history", ".zsh_history", ".zhistory", ".history", ".sh_history",
};

kShellHistoryFiles is simply the list of well-known shell history file names. Finally, genShellHistoryFromFile() reads each .history file and parses its contents:

void genShellHistoryFromFile(const std::string& uid, const boost::filesystem::path& history_file, QueryData& results) {
    std::string history_content;
    if (forensicReadFile(history_file, history_content).ok()) {
        auto bash_timestamp_rx = xp::sregex::compile("^#(?P<timestamp>[0-9]+)$");
        auto zsh_timestamp_rx = xp::sregex::compile("^: {0,10}(?P<timestamp>[0-9]{1,11}):[0-9]+;(?P<command>.*)$");
        std::string prev_bash_timestamp;
        for (const auto& line : split(history_content, "\n")) {
            xp::smatch bash_timestamp_matches;
            xp::smatch zsh_timestamp_matches;

            if (prev_bash_timestamp.empty() &&
                xp::regex_search(line, bash_timestamp_matches, bash_timestamp_rx)) {
                prev_bash_timestamp = bash_timestamp_matches["timestamp"];
                continue;
            }

            Row r;

            if (!prev_bash_timestamp.empty()) {
                r["time"] = INTEGER(prev_bash_timestamp);
                r["command"] = line;
                prev_bash_timestamp.clear();
            } else if (xp::regex_search(
                    line, zsh_timestamp_matches, zsh_timestamp_rx)) {
                std::string timestamp = zsh_timestamp_matches["timestamp"];
                r["time"] = INTEGER(timestamp);
                r["command"] = zsh_timestamp_matches["command"];
            } else {
                r["time"] = INTEGER(0);
                r["command"] = line;
            }

            r["uid"] = uid;
            r["history_file"] = history_file.string();
            results.push_back(r);
        }
    }
}

The logic is very straightforward:

  1. forensicReadFile(history_file, history_content) reads the file contents.
  2. Two regular expressions, bash_timestamp_rx and zsh_timestamp_rx, are compiled to parse the contents of the .history files. for (const auto& line : split(history_content, "\n")) iterates over the file line by line, matching each line against both patterns.
  3. Row r;...;r["history_file"] = history_file.string();results.push_back(r); writes each parsed entry into a Row and appends it to the results.

This completes the shell_history parsing: executing select * from shell_history runs through the flow above and returns all recorded history commands.
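The parsing steps above can be sketched in Python, using the same two regular expressions as genShellHistoryFromFile() (the parse_history helper is invented for illustration):

```python
import re

# Sketch of genShellHistoryFromFile()'s parsing loop: bash writes
# "#<timestamp>" on the line *before* a command, while zsh packs
# ": <timestamp>:<duration>;<command>" into a single line.
BASH_TS = re.compile(r"^#(?P<timestamp>[0-9]+)$")
ZSH_TS = re.compile(r"^: {0,10}(?P<timestamp>[0-9]{1,11}):[0-9]+;(?P<command>.*)$")

def parse_history(content):
    rows, prev_ts = [], ""
    for line in content.split("\n"):
        if not prev_ts and BASH_TS.match(line):
            prev_ts = BASH_TS.match(line).group("timestamp")
            continue
        zsh = ZSH_TS.match(line)
        if prev_ts:                      # command following a bash timestamp
            rows.append({"time": int(prev_ts), "command": line})
            prev_ts = ""
        elif zsh:                        # zsh extended-history line
            rows.append({"time": int(zsh.group("timestamp")),
                         "command": zsh.group("command")})
        else:                            # plain line, no timestamp
            rows.append({"time": 0, "command": line})
    return rows

sample = "#1540386418\nps -ef\n: 1540386500:0;pwd\nls -l"
print(parse_history(sample))
```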

As for the genShellHistoryFromBashSessions() function:

void genShellHistoryFromBashSessions(const std::string &uid,const std::string &directory,QueryData &results) {
    boost::filesystem::path bash_sessions = directory;
    bash_sessions /= ".bash_sessions";

    if (pathExists(bash_sessions)) {
        bash_sessions /= "*.history";
        std::vector <std::string> session_hist_files;
        resolveFilePattern(bash_sessions, session_hist_files);

        for (const auto &hfile : session_hist_files) {
            boost::filesystem::path history_file = hfile;
            genShellHistoryFromFile(uid, history_file, results);
        }
    }
}

genShellHistoryFromBashSessions() gathers history in a straightforward way:

  1. Resolve all the files matching .bash_sessions/*.history;
  2. Call the same genShellHistoryFromFile(uid, history_file, results) on each file to collect the history commands.
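The same two steps can be sketched with Python's pathlib (session_history_files is a hypothetical helper standing in for resolveFilePattern()):

```python
from pathlib import Path
import tempfile

# Sketch of genShellHistoryFromBashSessions(): resolve every
# .bash_sessions/*.history file under a home directory; each file would
# then be handed to the same per-file history parser.
def session_history_files(home):
    sessions = Path(home) / ".bash_sessions"
    return sorted(sessions.glob("*.history")) if sessions.is_dir() else []

with tempfile.TemporaryDirectory() as home:
    d = Path(home) / ".bash_sessions"
    d.mkdir()
    (d / "ABC.history").write_text("pwd\n")
    names = [p.name for p in session_history_files(home)]
print(names)
```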

Summary

Reading the code of good open-source software not only teaches the relevant knowledge but also reveals some design philosophy. A white hat with the ability to learn fast cannot afford weak spots: just a large set of solid, standard skills plus a few real strengths.

Monitoring a System with osqueryd

0x01 Introduction

The earlier article, Getting Started with osquery, introduced osquery mainly through osqueryi. osqueryi is an interactive shell that is handy for experimentation, but for real deployments osqueryd is clearly the better choice. This article describes osqueryd in detail.

0x02 Configuring osqueryd

With osqueryi we can pass settings on the command line, e.g. osqueryi --audit_allow_config=true --audit_allow_sockets=true --audit_persist=true. What about osqueryd? After installing osquery, it exists as a service on the system and can be controlled via systemctl; its unit file is /usr/lib/systemd/system/osqueryd.service:

[Unit]
Description=The osquery Daemon
After=network.service syslog.service

[Service]
TimeoutStartSec=0
EnvironmentFile=/etc/sysconfig/osqueryd
ExecStartPre=/bin/sh -c "if [ ! -f $FLAG_FILE ]; then touch $FLAG_FILE; fi"
ExecStartPre=/bin/sh -c "if [ -f $LOCAL_PIDFILE ]; then mv $LOCAL_PIDFILE $PIDFILE; fi"
ExecStart=/usr/bin/osqueryd \
  --flagfile $FLAG_FILE \
  --config_path $CONFIG_FILE
Restart=on-failure
KillMode=process
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

The start command is ExecStart=/usr/bin/osqueryd --flagfile $FLAG_FILE --config_path $CONFIG_FILE, which points at the configuration files via --flagfile and --config_path. $FLAG_FILE and $CONFIG_FILE are defined in /etc/sysconfig/osqueryd:

FLAG_FILE="/etc/osquery/osquery.flags"
CONFIG_FILE="/etc/osquery/osquery.conf"
LOCAL_PIDFILE="/var/osquery/osqueryd.pidfile"
PIDFILE="/var/run/osqueryd.pidfile"

So the default configuration files are /etc/osquery/osquery.flags and /etc/osquery/osquery.conf. When osqueryd starts, it creates these two files empty if they do not exist, and otherwise reads their contents. osquery.conf can be regarded as a superset of osquery.flags: the flags file only sets options, all of which can equally be set in osquery.conf, while osquery.conf can additionally define the SQL that osqueryd should run. The rest of this article therefore covers only osquery.conf.
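For instance, the audit flags passed to osqueryi above could instead be placed in /etc/osquery/osquery.flags, one flag per line, and osqueryd would pick them up at startup:

```
--audit_allow_config=true
--audit_allow_sockets=true
--audit_persist=true
```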

0x03 osquery.conf

osquery itself provides an example osquery.conf. It is written as a JSON file; here it is, slightly simplified:

{
  // Configure the daemon below:
  "options": {
    // Select the osquery config plugin.
    "config_plugin": "filesystem",

    // Select the osquery logging plugin.
    "logger_plugin": "filesystem",

    // The log directory stores info, warning, and errors.
    // If the daemon uses the 'filesystem' logging retriever then the log_dir
    // will also contain the query results.
    //"logger_path": "/var/log/osquery",

    // Set 'disable_logging' to true to prevent writing any info, warning, error
    // logs. If a logging plugin is selected it will still write query results.
    //"disable_logging": "false",

    // Splay the scheduled interval for queries.
    // This is very helpful to prevent system performance impact when scheduling
    // large numbers of queries that run a smaller or similar intervals.
    //"schedule_splay_percent": "10",

    // A filesystem path for disk-based backing storage used for events and
    // query results differentials. See also 'use_in_memory_database'.
    //"database_path": "/var/osquery/osquery.db",

    // Comma-delimited list of table names to be disabled.
    // This allows osquery to be launched without certain tables.
    //"disable_tables": "foo_bar,time",

    "utc": "true"
  },

  // Define a schedule of queries:
  "schedule": {
    // This is a simple example query that outputs basic system information.
    "system_info": {
      // The exact query to run.
      "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
      // The interval in seconds to run this query, not an exact interval.
      "interval": 3600
    }
  },

  // Decorators are normal queries that append data to every query.
  "decorators": {
    "load": [
      "SELECT uuid AS host_uuid FROM system_info;",
      "SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;"
    ]
  },
  "packs": {
    // "osquery-monitoring": "/usr/share/osquery/packs/osquery-monitoring.conf",
    ....
  }
}

The osquery.conf file can be divided into roughly four parts:

  • options: configuration options. Command Line Flags documents essentially all of them. This is also what osquery.flags configures, which is why osquery.conf can be regarded as a superset of osquery.flags;
  • schedule: the SQL statements to run. Since osqueryd runs as a daemon, queries defined in schedule are executed periodically and their results returned;
  • decorators: "decoration" queries. The decorators section also defines SQL statements, whose results are appended to the results of everything in schedule; that is why this example decorates each result with the host uuid and the logged-in username;
  • packs: bundled collections of SQL statements;

0x04 Configuration Details

The previous section gave a brief description of the osquery.conf settings; this section explains them in detail.

options

  • options are the runtime settings. Command Line Flags documents essentially all of them, and many more are available for those interested. Only a few common ones are covered here;
  • config_plugin is set to filesystem. Use filesystem when managing osquery through osquery.conf; the other choice is tls, used mainly when configuring osquery through an API;
  • logger_plugin is set to filesystem, which is also osquery's default. According to Logger plugins, the alternatives are tls, syslog (for POSIX), windows_event_log (for Windows), kinesis, firehose, and kafka_producer;
  • database_path defaults to /var/osquery/osquery.db. osquery uses an internal database, and this option sets where its database files live;
  • disable_logging controls whether osquery's results are written locally; it overlaps somewhat with logger_plugin: filesystem;
  • host_identifier (shown as hostIdentifier in result logs) identifies each host, for example by its hostname.
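If osquery's host_identifier option is used (hostIdentifier is how the field appears in result logs), the options block would look something like the fragment below; hostname is one accepted value:

```
{
  "options": {
    "host_identifier": "hostname"
  }
}
```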

schedule

schedule is where osqueryd's SQL statements are written. One example entry:

"system_info": {
    // The exact query to run.
    "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
    // The interval in seconds to run this query, not an exact interval.
    "interval": 3600
}

Here system_info is the name given to a query job, itself a JSON object supporting several settings, including:

  1. query defines the SQL statement to execute;
  2. interval is the execution period in seconds; 3600 in this example means the query runs roughly once an hour;
  3. snapshot, optional, may be set to snapshot: true. By default osquery runs in differential mode: for select * from processes;, each run reports only the changes relative to the previous run. With snapshot, every run reports all processes without comparing against earlier results;
  4. removed, optional, defaults to true and controls whether log entries with action=removed are written.

There are also less common options such as platform, version, shard, and description.
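For example, a snapshot-mode entry might look like the fragment below (the job name processes_snapshot is made up for illustration):

```
"processes_snapshot": {
  "query": "select pid, name, path from processes;",
  "interval": 600,
  "snapshot": true
}
```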

See the schedule documentation for more details.

decorators

As the comment says, "Decorators are normal queries that append data to every query": decorator results are appended to the results of the SQL defined in schedule, so decorators are not strictly required. This example defines two decorator queries:

SELECT uuid AS host_uuid FROM system_info;
SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;
  1. SELECT uuid AS host_uuid FROM system_info; takes the uuid from system_info as identifier 1;
  2. SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1; takes the most recent user from logged_in_users (effectively the username) as identifier 2;

More decorator statements could of course be added as identifiers, though there is rarely a need to.
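Conceptually, the daemon runs the decorator queries and splices their results into every scheduled-query log record. A toy sketch (decorate and the sample rows are invented, not osquery's code):

```python
# Toy model of decorators: each scheduled-query result record gets a
# "decorations" object built from the decorator queries' results.
def decorate(record, decorations):
    out = dict(record)                   # leave the original record intact
    out["decorations"] = dict(decorations)
    return out

decorations = {"host_uuid": "99264D56-9A4E-E593-0B4E-872FBF3CD064",
               "username": "username"}
record = {"name": "system_info", "columns": {"hostname": "localhost"}}
print(decorate(record, decorations)["decorations"]["username"])
```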

packs

packs are bundled collections of SQL statements. This example uses /usr/share/osquery/packs/osquery-monitoring.conf, an official pack of SQL statements for monitoring osquery itself:

{
  "queries": {
    "schedule": {
      "query": "select name, interval, executions, output_size, wall_time, (user_time/executions) as avg_user_time, (system_time/executions) as avg_system_time, average_memory, last_executed from osquery_schedule;",
      "interval": 7200,
      "removed": false,
      "blacklist": false,
      "version": "1.6.0",
      "description": "Report performance for every query within packs and the general schedule."
    },
    "events": {
      "query": "select name, publisher, type, subscriptions, events, active from osquery_events;",
      "interval": 86400,
      "removed": false,
      "blacklist": false,
      "version": "1.5.3",
      "description": "Report event publisher health and track event counters."
    },
    "osquery_info": {
      "query": "select i.*, p.resident_size, p.user_time, p.system_time, time.minutes as counter from osquery_info i, processes p, time where p.pid = i.pid;",
      "interval": 600,
      "removed": false,
      "blacklist": false,
      "version": "1.2.2",
      "description": "A heartbeat counter that reports general performance (CPU, memory) and version."
    }
  }
}

Configuration inside packs works exactly like schedule. The queries in this pack collect:

  • the schedule configuration set on osqueryd, from osquery_schedule;
  • all the events osqueryd supports, from osquery_events;
  • process-related information, from processes and osquery_info;

The benefit of packs is that a set of SQL statements serving one purpose can be kept in a single file.

0x05 Running osqueryd

Once everything above is configured, start the daemon with sudo osqueryd. With logger_plugin: filesystem set, logs land under /var/log/osquery, which contains several files, each recording different information.

osqueryd.results.log receives all of osqueryd's differential results, stored as JSON. For example:

{"name":"auditd_process_info","hostIdentifier":"localhost.localdomain","calendarTime":"Wed Oct 24 13:07:12 2018 UTC","unixTime":1540386432,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"},"columns":{"atime":"1540380461","auid":"4294967295","btime":"0","cmdline":"awk { sum += $1 }; END { print 0+sum }","ctime":"1538239175","cwd":"\"/\"","egid":"0","euid":"0","gid":"0","mode":"0100755","mtime":"1498686768","owner_gid":"0","owner_uid":"0","parent":"4086","path":"/usr/bin/gawk","pid":"4090","time":"1540386418","uid":"0","uptime":"1630"},"action":"added"}
{"name":"auditd_process_info","hostIdentifier":"localhost.localdomain","calendarTime":"Wed Oct 24 13:07:12 2018 UTC","unixTime":1540386432,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"},"columns":{"atime":"1540380461","auid":"4294967295","btime":"0","cmdline":"sleep 60","ctime":"1538240835","cwd":"\"/\"","egid":"0","euid":"0","gid":"0","mode":"0100755","mtime":"1523421302","owner_gid":"0","owner_uid":"0","parent":"741","path":"/usr/bin/sleep","pid":"4091","time":"1540386418","uid":"0","uptime":"1630"},"action":"added"}

Here added means the process appeared relative to the previous run; each execution result is one JSON record.
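Since each line is a standalone JSON object, the results log is easy to post-process. A minimal sketch (the parse_results helper is invented, and the sample line is abridged from the log above):

```python
import json

# Each line of osqueryd.results.log is one JSON record; pull out the
# command lines of processes whose action is "added".
def parse_results(lines):
    events = []
    for raw in lines:
        rec = json.loads(raw)
        if rec.get("action") == "added":
            events.append((rec["name"], rec["columns"].get("cmdline")))
    return events

line = ('{"name":"auditd_process_info","action":"added",'
        '"columns":{"cmdline":"sleep 60","pid":"4091"}}')
print(parse_results([line]))
```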

osqueryd.snapshots.log records the results of the SQL statements marked snapshot: true in osqueryd's config:

{"snapshot":[{"header":"Defaults","rule_details":"!visiblepw"},{"header":"Defaults","rule_details":"always_set_home"},{"header":"Defaults","rule_details":"match_group_by_gid"},{"header":"Defaults","rule_details":"env_reset"},{"header":"Defaults","rule_details":"env_keep = \"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\""},{"header":"Defaults","rule_details":"env_keep += \"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\""},{"header":"Defaults","rule_details":"env_keep += \"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY\""},{"header":"Defaults","rule_details":"secure_path = /sbin:/bin:/usr/sbin:/usr/bin"},{"header":"root","rule_details":"ALL=(ALL) ALL"},{"header":"%wheel","rule_details":"ALL=(ALL) ALL"}],"action":"snapshot","name":"sudoers","hostIdentifier":"localhost.localdomain","calendarTime":"Tue Oct  9 11:54:00 2018 UTC","unixTime":1539086040,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"}}
{"snapshot":[{"header":"Defaults","rule_details":"!visiblepw"},{"header":"Defaults","rule_details":"always_set_home"},{"header":"Defaults","rule_details":"match_group_by_gid"},{"header":"Defaults","rule_details":"env_reset"},{"header":"Defaults","rule_details":"env_keep = \"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\""},{"header":"Defaults","rule_details":"env_keep += \"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\""},{"header":"Defaults","rule_details":"env_keep += \"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY\""},{"header":"Defaults","rule_details":"secure_path = /sbin:/bin:/usr/sbin:/usr/bin"},{"header":"root","rule_details":"ALL=(ALL) ALL"},{"header":"%wheel","rule_details":"ALL=(ALL) ALL"}],"action":"snapshot","name":"sudoers","hostIdentifier":"localhost.localdomain","calendarTime":"Tue Oct  9 11:54:30 2018 UTC","unixTime":1539086070,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"}}

Because snapshot mode takes full snapshots, identical consecutive results are still written out in full.
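The difference between the two modes can be modelled in a few lines (a conceptual sketch, not osquery's actual diffing code):

```python
# Differential mode logs only what changed between two runs; snapshot
# mode would log the second result set wholesale every time.
def diff(prev_rows, cur_rows):
    prev, cur = set(prev_rows), set(cur_rows)
    return {"added": sorted(cur - prev), "removed": sorted(prev - cur)}

first = ["bash", "sshd"]
second = ["bash", "sleep"]
print(diff(first, second))
```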

osqueryd.INFO records the daemon's runtime status. For example:

Log file created at: 2018/11/22 17:06:06
Running on machine: osquery.origin
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1122 17:06:06.729902 22686 events.cpp:862] Event publisher not enabled: auditeventpublisher: Publisher disabled via configuration
I1122 17:06:06.730651 22686 events.cpp:862] Event publisher not enabled: syslog: Publisher disabled via configuration

osqueryd.WARNING records osquery's warnings. For example:

Log file created at: 2018/10/09 19:53:45
Running on machine: localhost.localdomain
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E1009 19:53:45.471046 104258 events.cpp:987] Requested unknown/failed event publisher: auditeventpublisher
E1009 19:53:45.471606 104259 events.cpp:987] Requested unknown/failed event publisher: inotify
E1009 19:53:45.471634 104260 events.cpp:987] Requested unknown/failed event publisher: syslog
E1009 19:53:45.471658 104261 events.cpp:987] Requested unknown/failed event publisher: udev

osqueryd.ERROR records osquery's error messages. For example:

Log file created at: 2018/10/09 19:53:45
Running on machine: localhost.localdomain
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E1009 19:53:45.471046 104258 events.cpp:987] Requested unknown/failed event publisher: auditeventpublisher
E1009 19:53:45.471606 104259 events.cpp:987] Requested unknown/failed event publisher: inotify
E1009 19:53:45.471634 104260 events.cpp:987] Requested unknown/failed event publisher: syslog
E1009 19:53:45.471658 104261 events.cpp:987] Requested unknown/failed event publisher: udev

In this example the errors and warnings happen to be identical; in practice they usually differ.

0x06 Summary

This article gave a brief account of the common osqueryd configuration, enough to get osquery up and running quickly. For space reasons, many aspects of osquery were not covered or not covered thoroughly. The official documentation describes osqueryd's configuration in great detail; consult it for anything unclear here, and feel free to discuss these topics with me.

That's all.

Getting Started with osquery

0x01 Introduction

osquery is a tool open-sourced by Facebook for querying, monitoring, and analyzing systems. Its own description reads:

osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events or file hashes.

Commands like ps, top, or ls -l in Linux produce output in fixed formats that look very much like tables. Perhaps starting from that observation, Facebook built osquery, which treats the operating system as a high-performance relational database. With osquery we can use SQL-like statements to query system information: running processes, loaded kernel modules, network connections, browser plugins, and so on (the granularity of what can be queried depends on what osquery implements).

osquery supports many platforms, including macOS, CentOS, Ubuntu, Windows 10, and FreeBSD; the supported versions are listed on the osquery homepage. Its accompanying documentation is also comprehensive: homepage, Github, readthedocs, and slack.

This article uses CentOS to demonstrate installing and using osquery.

0x02 Installation

The homepage provides installation packages for different operating systems; download the rpm file for CentOS. In this example the file is osquery-3.3.0-1.linux.x86_64.rpm, installed with sudo yum install osquery-3.3.0-1.linux.x86_64.rpm. A successful installation ends with:

Installed:
  osquery.x86_64 0:3.3.0-1.linux                                                                                                                                                             
Complete!

0x03 Running

osquery has two modes of operation: osqueryi (interactive shell) and osqueryd (daemon).

  • osqueryi is completely independent of osqueryd, does not need to run as an administrator, and gives an immediate view of the operating system's current state.
  • osqueryd runs scheduled queries to record how the operating system changes over time, for example which processes appeared or disappeared between two runs, saving the results to file or shipping them directly to kafka. osqueryd also uses operating system APIs to record changes to files and directories, hardware events, network activity, and more. On Linux it runs as a system service.

For demonstration purposes we use osqueryi to show osquery's capabilities. Simply typing osqueryi in a terminal enters interactive mode (osqueryi uses the sqlite shell syntax, so all of sqlite's built-in functions are available).

[user@localhost Desktop]$ osqueryi
Using a virtual database. Need help, type '.help'
osquery> .help
Welcome to the osquery shell. Please explore your OS!
You are connected to a transient 'in-memory' virtual database.

.all [TABLE]     Select all from a table
.bail ON|OFF     Stop after hitting an error
.echo ON|OFF     Turn command echo on or off
.exit            Exit this program
.features        List osquery's features and their statuses
.headers ON|OFF  Turn display of headers on or off
.help            Show this message
.mode MODE       Set output mode where MODE is one of:
                   csv      Comma-separated values
                   column   Left-aligned columns see .width
                   line     One value per line
                   list     Values delimited by .separator string
                   pretty   Pretty printed SQL results (default)
.nullvalue STR   Use STRING in place of NULL values
.print STR...    Print literal STRING
.quit            Exit this program
.schema [TABLE]  Show the CREATE statements
.separator STR   Change separator used by output mode
.socket          Show the osquery extensions socket path
.show            Show the current values for various settings
.summary         Alias for the show meta command
.tables [TABLE]  List names of tables
.width [NUM1]+   Set column widths for "column" mode
.timer ON|OFF      Turn the CPU timer measurement on or off

Through .help we can see the basic operations in osqueryi: .exit quits osqueryi, .mode switches the output format, .show displays the current osqueryi settings, .tables lists every table supported on the current operating system, and .schema [TABLE] shows the structure of a specific table.

osquery> .show
osquery - being built, with love, at Facebook

osquery 3.3.0
using SQLite 3.19.3

General settings:
     Flagfile: 
       Config: filesystem (/etc/osquery/osquery.conf)
       Logger: filesystem (/var/log/osquery/)
  Distributed: tls
     Database: ephemeral
   Extensions: core
       Socket: /home/xingjun/.osquery/shell.em

Shell settings:
         echo: off
      headers: on
         mode: pretty
    nullvalue: ""
       output: stdout
    separator: "|"
        width: 

Non-default flags/options:
  database_path: /home/xingjun/.osquery/shell.db
  disable_database: true
  disable_events: true
  disable_logging: true
  disable_watchdog: true
  extensions_socket: /home/xingjun/.osquery/shell.em
  hash_delay: 0
  logtostderr: true
  stderrthreshold: 3

The settings include general settings (General settings), shell settings (Shell settings), and non-default options (Non-default flags/options). The general settings mainly show where various files live (configuration files, log paths). The shell settings include whether headers are displayed (headers), the display mode (mode: pretty), and the separator (separator: "|").

.tables lists every table supported on the current operating system. Although the schema documentation lists all tables (for Windows, macOS, and Linux alike), any given platform exposes only its own tables. Below is what CentOS 7 shows:

osquery> .table
  => acpi_tables
  => apt_sources
  => arp_cache
  => augeas
  => authorized_keys
  => block_devices
  => carbon_black_info
  => carves
  => chrome_extensions
  => cpu_time
  => cpuid
  => crontab
...

.schema [TABLE] shows the structure of a specific table, as follows:

osquery> .schema users
CREATE TABLE users(`uid` BIGINT, `gid` BIGINT, `uid_signed` BIGINT, `gid_signed` BIGINT, `username` TEXT, `description` TEXT, `directory` TEXT, `shell` TEXT, `uuid` TEXT, `type` TEXT HIDDEN, PRIMARY KEY (`uid`, `username`)) WITHOUT ROWID;
osquery> .schema processes
CREATE TABLE processes(`pid` BIGINT, `name` TEXT, `path` TEXT, `cmdline` TEXT, `state` TEXT, `cwd` TEXT, `root` TEXT, `uid` BIGINT, `gid` BIGINT, `euid` BIGINT, `egid` BIGINT, `suid` BIGINT, `sgid` BIGINT, `on_disk` INTEGER, `wired_size` BIGINT, `resident_size` BIGINT, `total_size` BIGINT, `user_time` BIGINT, `system_time` BIGINT, `disk_bytes_read` BIGINT, `disk_bytes_written` BIGINT, `start_time` BIGINT, `parent` BIGINT, `pgroup` BIGINT, `threads` INTEGER, `nice` INTEGER, `is_elevated_token` INTEGER HIDDEN, `upid` BIGINT HIDDEN, `uppid` BIGINT HIDDEN, `cpu_type` INTEGER HIDDEN, `cpu_subtype` INTEGER HIDDEN, `phys_footprint` BIGINT HIDDEN, PRIMARY KEY (`pid`)) WITHOUT ROWID;

The .schema queries above for the users and processes tables output their corresponding DDL.
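Because osqueryi is built on SQLite, the DDL printed by .schema is plain SQLite and can be replayed elsewhere to experiment. A sketch using Python's sqlite3 with an abridged version of the users schema:

```python
import sqlite3

# Replay an abridged version of the `users` DDL shown by `.schema users`;
# WITHOUT ROWID tables are a standard SQLite feature.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users(`uid` BIGINT, `gid` BIGINT, `username` TEXT,"
    " `directory` TEXT, PRIMARY KEY (`uid`, `username`)) WITHOUT ROWID"
)
conn.execute("INSERT INTO users VALUES (1000, 1000, 'alice', '/home/alice')")
row = conn.execute(
    "SELECT username, directory FROM users WHERE uid >= 1000").fetchone()
print(row)
```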

0x04 Basic Usage

This section demonstrates using osqueryi to query system information in real time (results are shown in .mode line for readability).

Viewing system information

osquery> select * from system_info;
          hostname = localhost
              uuid = 4ee0ad05-c2b2-47ce-aea1-c307e421fa88
          cpu_type = x86_64
       cpu_subtype = 158
         cpu_brand = Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz
cpu_physical_cores = 1
 cpu_logical_cores = 1
     cpu_microcode = 0x84
   physical_memory = 2924228608
   hardware_vendor = 
    hardware_model = 
  hardware_version = 
   hardware_serial = 
     computer_name = localhost.localdomain
    local_hostname = localhost

The result includes the CPU model, core counts, memory size, computer name, and more.

Viewing the OS version

osquery> select * from os_version;
         name = CentOS Linux
      version = CentOS Linux release 7.4.1708 (Core)
        major = 7
        minor = 4
        patch = 1708
        build = 
     platform = rhel
platform_like = rhel
     codename =

We can see this machine's operating system is CentOS Linux release 7.4.1708 (Core).

Viewing kernel information

osquery> SELECT * FROM kernel_info;
  version = 3.10.0-693.el7.x86_64
arguments = ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
     path = /vmlinuz-3.10.0-693.el7.x86_64
   device = /dev/mapper/centos-root

osquery> SELECT * FROM kernel_modules LIMIT 3;
   name = tcp_lp
   size = 12663
used_by = -
 status = Live
address = 0xffffffffc06cf000

   name = fuse
   size = 91874
used_by = -
 status = Live
address = 0xffffffffc06ae000

   name = xt_CHECKSUM
   size = 12549
used_by = -
 status = Live
address = 0xffffffffc06a9000

Querying repo and package information

osquery provides tables for querying the system's repo and package information: apt-related package tables on Ubuntu, yum-related package tables on CentOS. The examples below all use yum packages.

osquery> SELECT * FROM yum_sources  limit 2;
    name = CentOS-$releasever - Base
 baseurl = 
 enabled = 
gpgcheck = 1
  gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

    name = CentOS-$releasever - Updates
 baseurl = 
 enabled = 
gpgcheck = 1
  gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

We can query yum_sources directly to see the system's yum source configuration.

osquery> SELECT name, version FROM rpm_packages order by name limit 3;
   name = GConf2
version = 3.2.6

   name = GeoIP
version = 1.5.0

   name = ModemManager
version = 1.6.0

rpm_packages lists the rpm packages installed on the system. We can also filter by name for the package we are interested in:

osquery> SELECT name, version FROM rpm_packages where name="osquery";
   name = osquery
version = 3.3.0

Mount information

The mounts table lets us query the system's mounted filesystems, for example with the following SQL statements:

SELECT * FROM mounts;
SELECT device, path, type, inodes_free, flags FROM mounts;

We can also use a where clause to query one specific filesystem type, such as ext4 or tmpfs:

osquery> SELECT device, path, type, inodes_free, flags FROM mounts WHERE type="ext4";
osquery> SELECT device, path, type, inodes_free, flags FROM mounts WHERE type="tmpfs";
     device = tmpfs
       path = /dev/shm
       type = tmpfs
inodes_free = 356960
      flags = rw,seclabel,nosuid,nodev

     device = tmpfs
       path = /run
       type = tmpfs
inodes_free = 356386
      flags = rw,seclabel,nosuid,nodev,mode=755

     device = tmpfs
       path = /sys/fs/cgroup
       type = tmpfs
inodes_free = 356945
      flags = ro,seclabel,nosuid,nodev,noexec,mode=755

     device = tmpfs
       path = /run/user/42
       type = tmpfs
inodes_free = 356955
      flags = rw,seclabel,nosuid,nodev,relatime,size=285572k,mode=700,uid=42,gid=42

     device = tmpfs
       path = /run/user/1000
       type = tmpfs
inodes_free = 356939
      flags = rw,seclabel,nosuid,nodev,relatime,size=285572k,mode=700,uid=1000,gid=1000

Memory information

Use memory_info to inspect memory:

osquery> select * from memory_info;
memory_total = 2924228608
 memory_free = 996024320
     buffers = 4280320
      cached = 899137536
 swap_cached = 0
      active = 985657344
    inactive = 629919744
  swap_total = 2684350464
   swap_free = 2684350464

Network interface information

Use interface_addresses to view the network interfaces:

osquery> SELECT * FROM interface_addresses;
     interface = lo
       address = 127.0.0.1
          mask = 255.0.0.0
     broadcast = 
point_to_point = 127.0.0.1
          type = 

     interface = virbr0
       address = 192.168.122.1
          mask = 255.255.255.0
     broadcast = 192.168.122.255
point_to_point = 
          type = 

     interface = lo
       address = ::1
          mask = ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
     broadcast = 
point_to_point = 
          type =

interface_details gives more detailed information about each interface.

SELECT * FROM interface_details;
SELECT interface, mac, ipackets, opackets, ibytes, obytes FROM interface_details;

The results look like this:

osquery> SELECT * FROM interface_details;
  interface = lo
        mac = 00:00:00:00:00:00
       type = 4
        mtu = 65536
     metric = 0
      flags = 65609
   ipackets = 688
   opackets = 688
     ibytes = 59792
     obytes = 59792
    ierrors = 0
    oerrors = 0
     idrops = 0
     odrops = 0
 collisions = 0
last_change = -1
 link_speed = 
   pci_slot = 
    ....

System uptime

osquery> select * from uptime;
         days = 0
        hours = 2
      minutes = 23
      seconds = 51
total_seconds = 8631

Querying user information

osquery provides several tables for querying user information: users lists all accounts on the system, last shows users' previous logins, and logged_in_users shows users with an active shell.

Use select * from users to see all users; a filter such as uid>1000 narrows them down.

osquery> select * from users where uid>1000;
        uid = 65534
        gid = 65534
 uid_signed = 65534
 gid_signed = 65534
   username = nfsnobody
description = Anonymous NFS User
  directory = /var/lib/nfs
      shell = /sbin/nologin
       uuid =

We can query recent login information with the last table, e.g. SELECT * FROM last;. For ordinary users the type value is 7, so the query condition is:

osquery> SELECT * FROM last where type=7;
username = user
     tty = :0
     pid = 12776
    type = 7
    time = 1539882439
    host = :0

username = user
     tty = pts/0
     pid = 13754
    type = 7
    time = 1539882466
    host = :0

The time column is a Unix timestamp; converting it to a date gives the actual login time.
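For instance, the conversion can be done in standard Python (the timestamp below is the one from the query output above):

```python
from datetime import datetime, timezone

# The `time` column of the last table is a Unix epoch timestamp.
ts = 1539882439  # taken from the query output above

# Convert to a human-readable UTC date.
login_time = datetime.fromtimestamp(ts, tz=timezone.utc)
print(login_time.strftime("%Y-%m-%d %H:%M:%S"))  # → 2018-10-18 17:07:19
```

Since osquery's SQL dialect is SQLite-based, a conversion such as datetime(time, 'unixepoch') directly inside the query should work as well.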

Use SELECT * FROM logged_in_users; to see which users are currently logged in.

Firewall information

The iptables table exposes the firewall configuration, e.g. select * from iptables;. Filtered queries also work, such as SELECT chain, policy, src_ip, dst_ip FROM iptables WHERE chain="POSTROUTING" order by src_ip;.

Process information

The processes table reports information about the processes on the system, including pid, name, path, cmdline, and so on.
Use select * from processes;, or select just a few columns: select pid,name,path,cmdline from processes;.

osquery> select pid,name,path,cmdline from processes limit 2;
    pid = 1
   name = systemd
   path = 
cmdline = /usr/lib/systemd/systemd --switched-root --system --deserialize 21

    pid = 10
   name = watchdog/0
   path = 
cmdline =

Checking scheduled tasks

The crontab table lists the scheduled tasks on the system.

osquery> select * from crontab;
       event = 
      minute = 01
        hour = *
day_of_month = *
       month = *
 day_of_week = *
     command = root run-parts /etc/cron.hourly
        path = /etc/cron.d/0hourly

       event = 
      minute = 0
        hour = 1
day_of_month = *
       month = *
 day_of_week = Sun
     command = root /usr/sbin/raid-check
        path = /etc/cron.d/raid-check

Other tables

Linux exposes many other tables that help with intrusion detection, including process_events, socket_events, and process_open_sockets; they let us confirm suspicious activity. How these tables work internally calls for further reading of the osquery source code.

0x04 Summary

This article introduced osquery's basic functionality; its full power remains to be explored. Overall, osquery abstracts operating system information into tables, which is a very elegant approach to baseline checking and system monitoring. Those strengths also make osquery a candidate HIDS agent, though a HIDS consisting of osquery alone is clearly not enough.

That's all.

Ramblings: Afl-fuzz Docker practice

Opening

It has been a while since I posted anything, so here is a casual write-up to break the silence. This article records some notes from my earlier tinkering with Afl-fuzz; nothing deep, so bear with me. As the security engineers sitting next to the developers, we are responsible for testing the product line: black-box testing driven by experience, grey-box testing based on decompiling installation packages and prying partial source code out of devices, and now fuzz testing, which is where Afl-fuzz comes in. For the well-known reason (security people rank low in the company), we have no code-level access to the products, so we effectively run Afl-fuzz as a black-box tool, as explained below. Let's begin.

First encounter with Afl-fuzz

Before looking at Afl-fuzz, a definition of fuzzing: fuzzing is a technique that automatically or semi-automatically feeds randomly generated data into a program and monitors it for exceptions, in order to discover potential security vulnerabilities. Fuzzing is currently among the most powerful techniques for identifying software security issues, and American Fuzzy Lop (Afl-fuzz for short) is an excellent fuzzer. With source code available, you compile with afl-gcc or afl-clang-fast and AFL inserts its instrumentation at compile time; without source, QEMU mode can provide the instrumentation. For fuzzing network programs, persistent mode can be used, and persistent mode is what I experimented with.

Server environment: Ubuntu 14.04, 4 cores, 8 GB RAM
My goal: simple, fuzz multiple samples quickly and in parallel with Afl-fuzz.

Many people have written about Afl-fuzz basics, which you can look up yourself. Here I use the while (__AFL_LOOP(1000)) pattern, i.e. persistent mode, and compile a C sender with afl-clang-fast++; it reads the payload mutated by AFL from a file and issues the HTTP request. For a simple walkthrough see @小葵师傅's https://www.jianshu.com/p/82c361c7d439

In short, running Afl-fuzz in persistent mode requires these pieces:

  1. seed: the input samples
  2. out: the designated output directory
  3. sender: the executable compiled with afl-clang-fast++
  4. case.txt: a complete request packet

The flow: Afl-fuzz runs sender, which reads the base request from case.txt, replaces the marked position in it with data mutated from the seed, and sends the newly built request to the server.
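That substitution step can be sketched in Python (the template, marker, and payload here are illustrative, not the article's actual case; the real sender does this in C, as shown later):

```python
def build_request(template: bytes, payload: bytes, marker: bytes = b"$") -> bytes:
    """Splice an AFL-mutated payload into the base request at the marker."""
    return template.replace(marker, payload, 1)

# Hypothetical template; the real one is the full HTTP request in case.txt.
template = b"GET /item?tid=$ HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(build_request(template, b"888'--"))
```

Doing this in C inside the persistent-mode loop matters for speed, since AFL drives the loop thousands of times per second.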

I wrote a shell script to quickly compile and launch Afl-fuzz against a sample.

A shell script for trying AFL in distributed mode:

riversec@box:~/test/test/test_shell$ cat run_aflfuzz 
#./run_aflfuzz fuzz_cross_site_uri 5
cd $1
afl-clang-fast++ sender.c -o sender
screen -AmdS MasterFuzz_$1 bash
screen -S MasterFuzz_$1 -X screen afl-fuzz -M master -i seed/ -o out/ ./sender request.txt
for (( i=3; i<$(($2+3)); i++ ))
do
  screen -AmdS ClusterFuzz$(($i-2))$1 bash
  screen -S ClusterFuzz$(($i-2))$1 -X screen afl-fuzz -S Cluster$(($i-2))$1 -i seed/ -o out/ ./sender request.txt
done
screen -ls
echo "start over..."

The script compiles sender with afl-clang-fast++, uses screen to keep the jobs detached, and runs AFL's master/slave mode (the -M/-S options; the secondary nodes are named Cluster* here) for distributed fuzzing. Invocation:

./run_aflfuzz fuzz_cross_site_uri 5
  • fuzz_cross_site_uri is the screen session tag, used to tell nodes apart
  • request.txt is the complete HTTP request captured with Burp
  • seed is the input: the value extracted from the position of the HTTP request to be fuzzed, used as the mutation seed
  • out is the output: AFL's status information and output files live here
  • the number 5 creates 5 nodes

Although this only fuzzes one sample, it is already distributed fuzzing. So can I reach my goal simply by running run_aflfuzz a few more times for the other samples?

Only after trying did I find it is not that simple...

What problems came up?

Problem 1: the CPU core count limits how many samples can be fuzzed at once

Everyday work means testing all kinds of features, so there are many types of requests to fuzz. Whenever we tried to run several samples on one machine, we hit this message:

201808231535036581290334.png

Anyone who has used Afl-fuzz will recognize it: blame our poverty. Does 4 cores really mean only 4 samples at a time?

Problem 2: capturing samples, making seeds, and marking fuzz positions by hand

Under the current setup, every sample to be fuzzed requires repeating these steps:

  1. Capture the HTTP request with Burp
  2. Replace the position to fuzz with $, e.g. mark tid=888 as tid=$
  3. Use 888 as the seed
  4. Start fuzzing with the afl-fuzz command

Every sample needs all of this; only a few steps, but tiresome.

Implementing it step by step

Having identified the problems above, I started thinking about solutions. For the CPU core limit, Docker-based virtualization came to mind first, and it proved workable.

Here is the implementation in detail.

1. Install Docker

First install Docker; I will not cover the installation process here.

2. Search for images

Search the registry for an AFL image someone has already built, and use it directly.

docker search afl

The first result can be used as-is, and it is the one I used. As shown:

201808241535091650456788.png

After pulling it and applying some customization, run docker commit to create your own new image.

3. Create containers

Container creation deserves care. We want Docker to run multiple fuzzing jobs, and we must also prepare for checking their status and job information later. Once Afl-fuzz is running, it keeps its status up to date in files under the out directory.

For the host to read those status files in real time, we share the directory between host and container. The creation command is:

docker run -td --privileged -v /var/docker_afl/share/case1:/case --name "fuzz_node1" -h "fuzz_node1" moflow/afl-tools /bin/bash

The case1 directory under /var/docker_afl/share/ on the host maps to /case in the container, as in this screenshot:

201808241535093063259488.png

The screenshot above shows a fully prepared node. It contains: case.txt, the complete HTTP request; out, the output directory; seed, the input directory; sender, the persistent-mode sender compiled with afl-clang-fast++; and tool.py, a helper script described below.

4. Prepare the data

Prepare the data the Afl-fuzz run needs. First capture a request with Burp:

POST /ajax/login/backend/?b0QHe3bMd9fNm57=$$$K17GUKsYaGNxIHiFehHlOhNfyy_B9944oJJ8DW_2tsQgytxlxiHVKrGP362pXExTBoA0VwdqYDkheIat1EeiQymXPjk6ZNRPTkjyIo2W63tdF$$$ HTTP/1.1
Host: 10.10.66.132
Content-Length: 25
Accept: */*
Origin: http://10.10.66.132
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://10.10.66.132/ajax/login/
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7
Cookie: csrftoken=fVYWMCFUG6gW4ONjm48179CsKAiamipA; send_log_path=%2Ftmp%2Flog%2Fnew%2Faccess.log.1; FSSBBIl2UgzbN7NCSRF=qDWzcpd3KP1mAw5r0gbsg06pu4TSyuYU; b0QHe3bMd9fNm57=K17GUKsYaGNxIHiFehHlOhNfyy_B9944oJJ8DW_2tsQgytxlxiHVKrGP362pXExTBoA0VwdqYDkheIat1EeiQymXPjk6ZNRPTkjyIo2W63tdF
Connection: close

username=1e&password=1212

This request is case1, the one we want to fuzz. I used $$$ to mark the position to fuzz (similar to Sqlmap's custom injection point markers). After marking, the original parameter value:

K17GUKsYaGNxIHiFehHlOhNfyy_B9944oJJ8DW_2tsQgytxlxiHVKrGP362pXExTBoA0VwdqYDkheIat1EeiQymXPjk6ZNRPTkjyIo2W63tdF

should be used as the seed for mutation: Afl-fuzz derives new values from it with its internal mutation algorithms, substitutes them in, and sends the request. The helper script tool.py extracts the marked value and saves it as the seed:

import re
import sys
import os

def alter(inf, ouf):
    # Scan the request file for a $$$...$$$ marker, write the marked value
    # out as the seed, and collapse the marker to a single '$' placeholder.
    with open(inf, "r") as f1, open("%s.bak" % inf, "w") as f2:
        for line in f1:
            flg = re.findall(r'\$\$\$.+\$\$\$', line)
            if len(flg) > 0 and len(flg[0]) > 1:
                flag = flg[0]
                print("[*] Found:\n\t" + flag)
                print("[*] seed: " + os.getcwd() + os.sep + "seed")
                with open(ouf, "w") as of:
                    of.write(flag.replace("$$$", ""))
                f2.write(line.replace(flg[0], '$'))
            else:
                f2.write(line)
    os.rename("%s.bak" % inf, inf)

print("[*] run checking....")
alter(sys.argv[1], sys.argv[2])
print("[*] run over....")

With that ready, write sender.c. The program does three things: 1. load the contents of case.txt; 2. read data from stdin and substitute it at the $ marker in case.txt, i.e. combine AFL's mutated data into a new request; 3. send it to the server over a socket.

Part of the code:

while (__AFL_LOOP(1000)) {
    /* Reload the original request template. */
    memset(copy_file, 0, sizeof(copy_file));
    memcpy(copy_file, request.data(), numread);
    per_req.assign(copy_file);

    /* Read the AFL-mutated payload from stdin and splice it in at the '$' marker. */
    memset(target, 0, sizeof(target));
    read(STDIN_FILENO, target, 10240);
    TrimSpace(target);
    per_req.replace(per_req.find("$"), 1, target);

    // printf("%s\r\n", per_req.data());

    /* Send the rebuilt request to the target server. */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    dest_addr.sin_family = AF_INET;
    dest_addr.sin_port = htons(destport);
    dest_addr.sin_addr.s_addr = inet_addr("10.10.66.138");
    memset(&dest_addr.sin_zero, 0, 8);

    connect(sockfd, (struct sockaddr *) &dest_addr, sizeof(struct sockaddr));
    send(sockfd, per_req.data(), strlen(per_req.data()), 0);
    // recv(sockfd, buffer, 1024, 0);
    // printf("%s", buffer);
    close(sockfd);
}

The address 10.10.66.138 is the server under fuzz.

5. Initialize the Afl-fuzz environment

With the preparation above done, we can now finish, from the host, setting up what the Afl-fuzz process inside the container needs to run.

  • Copy case.txt, sender.c, and tool.py into the container
cp /var/docker_afl/cases/request.txt /var/docker_afl/share/node1/case.txt
cp /var/docker_afl/share/sender.c /var/docker_afl/share/node1/sender.c
cp /var/docker_afl/share/tool.py /var/docker_afl/share/node1/tool.py

The node1 folder here corresponds to the /case directory of container node1; copy the necessary files into it.

Start the node manually (it is started by default when first created):

docker start fuzz_node1
  • Convert the case into the format sender can handle

Run commands inside the container to finish setting up the environment: execute the tool.py helper to extract the $$$-marked value as the seed input, and create the seed directory:

docker exec -it fuzz_node2 bash -c 'python /case/tool.py /case/case.txt /case/1'
docker exec -it fuzz_node2 bash -c 'mkdir /case/seed'
docker exec -it fuzz_node2 bash -c 'mv /case/1 /case/seed/1'

6. Start fuzzing

With the environment ready we can start the fuzz run in the container. There are two ways: enter the container, or have it run internally via bash -c. For example:

docker exec -it $1 bash -c 'afl-fuzz -M master -i /case/seed/ -o /case/out/ /case/sender /case/case.txt'

In practice Afl-fuzz needs to stay attached to an open terminal, so neither approach above is quite right on its own; screen helps here, and it proved workable:

screen -AmdS node1 bash
screen -S node1 -X screen docker exec -it fuzz_node1 bash -c 'afl-fuzz -M master -i /case/seed/ -o /case/out/ /case/sender /case/case.txt'

Once these two commands run, Afl-fuzz is working in the background. Switch in with screen -r node1 and you will see the familiar Afl-fuzz interface.

201808241535098560494608.png

7. Write a shell script

To save effort, a script named create.sh:

docker run -td --privileged -v /var/docker_afl/share/$1:/case --name "$1" -h "$1" komi/afl-fuzz-v2.0 /bin/bash

cp /var/docker_afl/cases/request.txt /var/docker_afl/share/$1/case.txt
cp /var/docker_afl/share/sender.c /var/docker_afl/share/$1/sender.c
cp /var/docker_afl/share/tool.py /var/docker_afl/share/$1/tool.py

docker exec -it $1 bash -c 'python /case/tool.py /case/case.txt /case/1'
docker exec -it $1 bash -c 'mkdir /case/seed'
docker exec -it $1 bash -c 'mv /case/1 /case/seed/1'
docker exec -it $1 bash -c 'afl-clang-fast++ /case/sender.c -o /case/sender'

screen -AmdS $1 bash

The result:

201808241535098509748627.png

At this point the fuzzing of one request is underway: AFL generates a large volume of samples and issues network requests, a black-box fuzz.

8. Check the status

How do we check a fuzz job's status? Each node* folder under /var/docker_afl/share represents an independent fuzz job. Below, for example, jobs node0, node1, node2, node3, and node9 exist.

201808241535101012719458.png

A job's AFL status can be read from its files.

For example, to see the progress of the newly created node1 job, change into /var/docker_afl/share/node1/out/master and open the fuzzer_stats file:

The figure we watch is the execution speed, 1834.11 execs/s here.

201808241535101044432478.png
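fuzzer_stats is a plain-text file of key : value lines, so pulling a metric such as execs_per_sec out of each node is easy to script. A sketch (the sample values below are illustrative, mimicking the format AFL writes):

```python
def parse_fuzzer_stats(text: str) -> dict:
    """Parse AFL's fuzzer_stats format: one 'key : value' pair per line."""
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

# Illustrative excerpt of a fuzzer_stats file.
sample = """execs_done        : 254023
execs_per_sec     : 1834.11
paths_total       : 12"""
print(parse_fuzzer_stats(sample)["execs_per_sec"])  # → 1834.11
```

Looping such a parser over /var/docker_afl/share/node*/out/master/fuzzer_stats would give a one-glance summary of every node without attaching to each screen session.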

Other checks:

  1. View the current job's seed

    cat /var/docker_afl/share/node1/seed/1

  2. View the current job's request case

    cat /var/docker_afl/share/node1/case.txt

  3. View AFL runtime information

    screen -r node1

201808241535101415242588.png

  4. Watch the network requests reaching the .138 server

Command: sudo tcpdump -s 1024 -l -A -n -i eth0 src 10.10.66.131

which confirms AFL is hard at work...

201808241535101448357785.png

Postscript

Nothing yet.

References