Friday, March 31, 2017

NBDServer

I was reading a blog written a while back about using NBDServer to image memory across the network. This version of NBDServer is a DFIR/forensic take by Jeff Bryner (@0x73ff).

Ken Pryor came across an issue where, if the machine had more than 4GB of memory, it would lock up and blue screen. A quick Google search led me to this email thread, in which Michael Cohen attributes the issue to NBDServer not "implementing the correct algorithm for skipping the reserved memory regions." With this knowledge in hand, I set out to solve the problem.

So what's going on? The first thing we need to do is load the winpmem driver:

winpmem-2.1.post4.exe -l (lowercase L)

Once the driver is loaded, we need to start NBDServer with the debugging option to try and narrow down what is happening:

nbdserver.exe -c <your-IP> -f \\.\pmem -d

We then need to start the client on the Linux box:

modprobe nbd
nbd-client <your-IP> 60000 /dev/nbd0

Back on the Windows box, the output from NBDServer looks something like this:

[*] Opening memory...delay file open until socket init.
[*] Listening...
[*] Init socket loop
[*] Init socket loop
[*] Init socket loop
[*] opening memory
[*] CR3: 0x0000187000 6 memory ranges:
[*] Start 0x00001000 - Length 0x0009C000
[*] Start 0x00100000 - Length 0xC8670000
[*] Start 0xC8777000 - Length 0x00442000
[*] Start 0xC8F5E000 - Length 0x127C1000
[*] Start 0xDBFFF000 - Length 0x00001000
[*] Start 0x100000000 - Length 0x21EE00000
[*] Negotiating...sending NBDMAGIC header
[*] Started!
[*] Request: read From: 0 Len: 1024
[*] Sending pad: 0,1024
[*] Request: read From: 1024 Len: 1024
[*] Sending pad: 1024,1024
[*] Request: read From: 2048 Len: 1024
[*] Sending pad: 2048,1024
[*] Request: read From: 3072 Len: 1024
[*] Sending pad: 3072,1024
[*] Request: read From: 4096 Len: 1024
[*] Sending mem: 4096,1024
[*] Request: read From: 5120 Len: 1024
[*] Sending mem: 5120,1024
[*] Request: read From: 6144 Len: 1024
[*] Sending mem: 6144,1024
[*] Request: read From: 7168 Len: 1024
[*] Sending mem: 7168,1024
[*] Request: read From: 12288 Len: 1024
[*] Sending mem: 12288,1024
[*] Request: read From: 13312 Len: 1024
[*] Sending mem: 13312,1024
[*] Request: read From: 14336 Len: 1024
[*] Sending mem: 14336,1024
[*] Request: read From: 15360 Len: 1024
[*] Sending mem: 15360,1024

From the output above, we can see the 6 memory ranges along with their starting points and lengths. Converting the first range from hex to decimal gives a starting point of 4,096 and a length of 638,976. That means everything from 0 up to 4,096 should not be read, but instead padded with 0's to keep our offsets correct, and everything from 4,096 up to 643,072 should be read. So far everything is in order: we can see padding being sent up to 4,096, at which point NBDServer starts reading memory. Let's dig deeper to find out what is going on.

Converting the second memory range the same way shows that it starts at 1,048,576 and ends at 3,363,241,984. That means everything from 643,072 up to 1,048,576 should be padded. Because we started NBDServer in debug mode, we can check this.

[*] Sending mem: 641024,1024
[*] Sending mem: 642048,1024
[*] Sending mem: 643072,1024
[*] Sending mem: 644096,1024
[*] Sending mem: 645120,1024
[*] Sending mem: 646144,1024
[*] Request: read From: 647168 Len: 131072
[*] Sending mem: 647168,1024
[*] Sending mem: 648192,1024

Bingo! No padding. This explains why Windows was locking up and blue screening: we are reading from the reserved memory regions. Luckily, we have the source code, so we can find out exactly what is happening. Here is the part of the code that does the padding:

// are we padding or reading memory based on our 'position' in the memory 'file'
if (bMemory) {
    for (i = 0; i < info->number_of_runs; i++) {
        if ((info->runs[i].start <= cur_offset.QuadPart) && (nb <= info->runs[i].length)) {
            bPad = false;  // really read the mem driver
            //debugLog(sformat("no pad for : %lld, %d ",cur_offset.QuadPart,nb));
        }
    }
}

The problem lies in this statement:

if ((info->runs[i].start <= cur_offset.QuadPart) && (nb <= info->runs[i].length))

Let's add some debugging to see what these values are:

[*] start: 4096 offset: 2753177115951104 nb: 0 length: 1024
[*] Sending mem: 641024,1024
[*] start: 4096 offset: 2757575162462208 nb: 0 length: 1024
[*] Sending mem: 642048,1024
[*] start: 4096 offset: 2761973208973312 nb: 0 length: 1024
[*] Sending mem: 643072,1024
[*] start: 4096 offset: 2766371255484416 nb: 0 length: 1024
[*] Sending mem: 644096,1024
[*] start: 4096 offset: 2770769301995520 nb: 0 length: 1024
[*] Sending mem: 645120,1024
[*] start: 4096 offset: 2775167348506624 nb: 0 length: 1024
[*] Sending mem: 646144,1024
[*] Request: read From: 647168 Len: 131072
[*] start: 4096 offset: 2779565395017728 nb: 0 length: 1024
[*] Sending mem: 647168,1024
[*] start: 4096 offset: 2783963441528832 nb: 0 length: 1024
[*] Sending mem: 648192,1024

After 643,072 the memory should be padded, yet the test to read evaluates to true for every one of these values. This is definitely the problem. Let's change the statement so it works correctly:

if (bMemory) {
    for (i = 0; i < info->number_of_runs; i++) {
        if ((info->runs[i].start <= cur_offset.QuadPart) && (cur_offset.QuadPart <= info->runs[i].start + info->runs[i].length)) {
            bPad = false;  // really read the mem driver
            //debugLog(sformat("no pad for : %lld, %d ",cur_offset.QuadPart,nb));
        }
    }
}

Let's run this again with the updated if statement and see what happens:

[*] Sending mem: 641024,1024
[*] Sending mem: 642048,1024
[*] Sending mem: 643072,1024
[*] Sending pad: 644096,1024
[*] Sending pad: 645120,1024
[*] Sending pad: 646144,1024
[*] Request: read From: 647168 Len: 131072
[*] Sending pad: 647168,1024
[*] Sending pad: 648192,1024

Nice! Everything after 643,072 is getting padded like it should. After this small change, I was able to dd 12GB of memory without blue screening the computer. I was also able to run Volatility against the network block device. Volatility ran rather fast, except for imageinfo, which takes a really long time. It's not F-Response, but it is a cheap alternative in a pinch. The updated version of NBDServer can be found here until the pull request is accepted. Enjoy!