
Nonsensical IP and port values in async_load[]

PNelly

Member
I've landed a real fun one guys.

A little context in the first few paragraphs then the spooky stuff.

I have a fairly complex networking project going that's got a meet-up server which facilitates udp hole punching, a reliable udp implementation, all kinds of neat stuff. Recently I made some changes to how the meet-up server and meet-up clients talk to each other to reduce bandwidth consumption by that part of the application. I started encountering a very strange fault that I haven't been able to track down, and I'm beginning to wonder if perhaps packets are being corrupted in transit, or something else is going wrong outside of the GML. I'll explain why.

The problem manifests itself as a crash caused by trying to access a map that doesn't exist, or a non-existent key within a map that does exist, with a low reproducibility of about 5%. Not the best, but should still be straightforward right? Capture the packet metadata in the headers I created (a message id and some other parameters), and use that information to pinpoint where I'm writing or reading data incorrectly.

My message ids are declared as two sets of enums (one for tcp and one for udp) that span the ranges 0-18 and 1000-1010. Any time a packet is to be sent, the buffer containing the data is passed to a script that fills in all the header information, with the message id enum as an argument. Any time a packet is received, the header information is consumed and the appropriate action taken with the data. It's been a pretty rock solid system so far.
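
Roughly sketched, the declarations and the header-writing script look something like this (the real scripts are longer, and write_header is a made-up name for illustration):
Code:
// tcp message ids span 0-18, udp message ids span 1000-1010
enum rdvz_msg {
    rdvz_msg_enum_start = 0,
    // ... individual tcp message ids ...
    rdvz_msg_enum_end = 18
}
enum udp_msg {
    udp_msg_enum_start = 1000,
    // ... individual udp message ids ...
    udp_msg_enum_end = 1010
}

/// write_header(buffer, is_udp, msg_id)
var _buffer = argument0;
var _is_udp = argument1;
var _msg_id = argument2;
buffer_seek(_buffer, buffer_seek_start, 0);
buffer_write(_buffer, buffer_u8,  _is_udp); // leading bool: 0 = tcp, 1 = udp
buffer_write(_buffer, buffer_u16, _msg_id); // id from one of the enums above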

Skipping some of the intermediate detective work, I discovered an invalid message id is being passed around somehow as a part of the problem, with a value of 44000 or some such nonsense causing a bad map access. Since that doesn't tell me where to go looking for the fault my next step was to capture the ip and port associated with the Network Async event itself to try and gather more clues. I figured it'd at least point me towards which program instances were talking to each other, and what state they were in when things went wrong.

Now it gets weird. I was able to capture some of these packets this morning and got the following for the ip and port reported by the Network Async event, which, if you recall, come from async_load[] and not from reading the buffer associated with the data event:

  • ip:148.245.24.0
  • port: 0

I've been testing by running multiple program instances on the same machine, so they're all using the ip 127.0.0.1. The ip 148.245.24.0 is not an address on my local network (logged into my router to double check) and my internet connection was disabled at the time. The port value of 0 of course makes no sense at all to begin with, and additionally the meet-up server uses a port in the 4000 range and the client sockets are all placed in the ephemeral port range (49152 - 65535). Simply bizarre.

Of course there's probably something wrong with my code or system design that contributes to (or outright causes) the problem, but those weird values give me some doubts. It stands to reason that if I were writing or reading buffer data with the wrong format or type, I could reproduce the crash very consistently, rather than once in a blue moon with everything (appearing to be) working flawlessly the rest of the time. Further, I think the funky ip and port values in async_load[] would have to come from somewhere under the hood, not from the GML itself.
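
To illustrate what I mean, a systematic type mismatch like the one below should break every single packet rather than one in twenty (buffer names hypothetical):
Code:
// writer side: message id stored as two bytes
buffer_write(_send_buffer, buffer_u16, _msg_id);

// reader side: mistakenly consumed as one byte, so every field read
// after this point is one byte out of alignment
var _bad_id = buffer_read(_recv_buffer, buffer_u8);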

A few points of information that might be relevant:

  • When the crash does appear it happens when an established udp session is broken up. That process entails the (ex) udp host sending udp data to tell the clients to pack up; the (ex) udp clients then close their udp sockets, open tcp sockets, and connect back to the meet-up server, which leads to more information being exchanged. Could all of those machinations contribute to data being interpreted incorrectly?
  • Like I said in the previous bullet, there's tcp and udp stuff often happening at the same time. Is there some nuance about the Network Async event perhaps treating them differently that could be contributing?
  • Each time I've seen the crash happen the invalid message id is the same. Additionally, the very first item in the buffer header is supposed to be a boolean value, but reading it as a u8 shows it contains 173. That evaluates to true in a condition check but is clearly wrong (a quick defensive check for this is sketched just below this list). That would seem to indicate I've written bad data somewhere, but even so, what's up with the crazy associated ip address and port number?
  • I'm not using the *_raw network functions so whatever's going on I think the GMS header has to be intact for this to appear, and the information I capture is the same each time it happens. I think this ought to rule out packet corruption.
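
For reference, the defensive check mentioned in the third bullet could be as simple as this (sketch):
Code:
// reject any packet whose leading "is udp" byte isn't a legal bool
buffer_seek(_buffer, buffer_seek_start, 0);
var _flag = buffer_read(_buffer, buffer_u8);
if (_flag != 0 && _flag != 1) {
    // malformed header (e.g. the 173 above) - don't process this packet
    exit;
}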

Any insights on what in the world might be going on would be awesome. I think the next thing I'll try when I get home is simply ignoring any packets with bad metadata, then seeing what information, if any, is missing in the application that received it. I really don't like the idea of that approach becoming a band-aid though; I'd very much like to get to the bottom of this.

More generally, if packet corruption is possible (whether or not it's happening here), should I be looking out for bad metadata all the time? My understanding is that sort of thing is already taken care of with checksums in the transport layer, and that I shouldn't have to worry about it?

Any help appreciated!
Cheers,
Patrick
 

Yal

šŸ§ *penguin noises*
GMC Elder
Networking is not my strong suit, so I'll probably not be very useful here, but I'll try to come up with some ideas since you PM'd me and all.
  • If you store data for multiple different things in a buffer, it sounds like you're asking for trouble :p Are you sure all data have the same length and such so that you won't accidentally corrupt the buffer if you write to it in an unexpected order?
  • Values 0.5 and up are interpreted as true in GML, and 0.5-eps and down as false. This is pretty nonstandard behavior compared to other languages, so be careful about using floats, or anything you're not sure is always an integer or enum, in conditions (quick demo below this list).
  • Ignoring malformed input sounds like a pretty good security measure when you don't have control over what data gets input. Even if it's a bit of a hack now, it will make your system more robust once you have a 'real' network where you might get corrupted or malicious packets.
  • Be aware that GM reuses IDs for data structures now, so if you create and destroy ds_maps and stuff manually a lot that could cause your code to think a destroyed map still exists.
  • I'm thinking that ideally (i.e. if possible) you should print out the data AROUND the strangely corrupt data as well, and see if it's valid data but misaligned a few bytes or such.
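
A couple of those gotchas in a few lines of GML (just a sketch):
Code:
// truthiness: values well above 0.5 are true, values well below are false
if (0.7) show_debug_message("0.7 is true");
if (0.2) show_debug_message("never printed: 0.2 is false");

// id reuse: a destroyed map's id can be recycled by the next create,
// so a stale reference silently points at the wrong structure
var _a = ds_map_create();
ds_map_destroy(_a);
var _b = ds_map_create(); // may receive the very id _a used to hold
_a[? "key"] = 1;          // no error, but this writes into _b's map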
 

FrostyCat

Redemption Seeker
If TCP and UDP activity are both happening simultaneously, you need to be extra careful about checking async_load[? "id"]. I suspect that one of your TCP calls got processed using a routine for UDP calls.
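
For instance, something along these lines at the top of the async event (socket variable names are placeholders):
Code:
// route the event by the socket that raised it before touching any data
var _event_socket = async_load[? "id"];
if (_event_socket == tcp_socket) {
    // tcp handling: connects, disconnects, stream data
} else if (_event_socket == udp_socket) {
    // udp handling: datagrams only
}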
 

PNelly

Member
I've got a bit more information about the problem, but it's definitely still mysterious.

I wrote a script that inspects every incoming packet for valid metadata, and for consistency between what the packet says it contains and what socket it arrived on. If something fishy crops up, it throws a dialog and saves the relevant variables along with the buffer to files so I can inspect them more closely. It's below if you want to check it out:
Code:
/// valid_packet(source_ip, source_port, related_socket, buffer)

// check packet meta data and contents against expected values to determine
// whether it should be processed at all

var _ip     = argument0;
var _port   = argument1;
var _socket = argument2;
var _buffer = argument3;

var _valid_rdvz_meta = false;
var _valid_udp_meta  = false;
var _valid_meta_data = false;

// verify metadata
_valid_rdvz_meta = (_ip == rendevouz_ip && _port == rendevouz_tcp_port);

if(!udp_is_host()){
    _valid_udp_meta =((_ip == udp_host_to_join_ip && _port == udp_host_to_join_port)
                    ||(_ip == udp_host_ip && _port == udp_host_port));
} else {
    var _idx, _map, _num;
    _num = ds_list_size(udp_client_list);
    for(_idx=0;_idx<_num;_idx++){ // check clients
        _map = udp_client_maps[? udp_client_list[| _idx]];
        if(_map[? "ip"] ==_ip && _map[? "port"] == _port){
            _valid_udp_meta = true;
            break;
        }
    }
    if(!_valid_udp_meta){
        _num = ds_list_size(udp_hole_punch_list);
        for(_idx=0;_idx<_num;_idx++){ // check hole punch
            _map = udp_hole_punch_maps[? udp_hole_punch_list[| _idx]];
            if(_map[? "ip"] ==_ip && _map[? "port"] == _port){
                _valid_udp_meta = true;
                break;
            }
        }
    }
}

_valid_meta_data = (_valid_rdvz_meta || _valid_udp_meta);

// verify contents
var _bool_u8;
var _msg_id;

var _valid_bool        = false;
var _valid_rdvz_msg_id = false;
var _valid_udp_msg_id  = false;
var _valid_msg_id      = false;

buffer_seek(_buffer,buffer_seek_start,0);
_bool_u8 = buffer_read(_buffer,buffer_u8);
_msg_id  = buffer_read(_buffer,buffer_u16);

_valid_bool = (_bool_u8 == 0 || _bool_u8 == 1);

_valid_rdvz_msg_id = (_msg_id >= rdvz_msg.rdvz_msg_enum_start
                    &&_msg_id <= rdvz_msg.rdvz_msg_enum_end);
_valid_udp_msg_id  = (_msg_id >= udp_msg.udp_msg_enum_start
                    &&_msg_id <= udp_msg.udp_msg_enum_end);
       
_valid_msg_id = (_valid_rdvz_msg_id || _valid_udp_msg_id);

// verify consistency between socket and message id
var _consistent_rdvz = false;
var _consistent_udp  = false;
var _consistent      = false;

_consistent_rdvz = (_valid_rdvz_msg_id && _socket == rdvz_client_socket);
_consistent_udp  = (_valid_udp_msg_id  &&(_socket == udp_client_socket || _socket == udp_host_socket));
_consistent      = (_consistent_rdvz || _consistent_udp);

// check flags and save packet if it doesn't make sense
if(!_valid_meta_data || !_valid_bool || !_valid_msg_id ||!_consistent){
    var _time = current_time;
    buffer_save(_buffer,"badbuffer"+string(_time));
    var _note_file = file_text_open_write("BadBufferNote"+string(_time));
    var _note_string = "Invalid packet detected by rdvz_id "+string(rendevouz_id)+" udp_id "+string(udp_id)+"#"
        +"valid rdvz meta "+string(_valid_rdvz_meta)+" valid udp meta "+string(_valid_udp_meta)
            +" consistent rdvz "+string(_consistent_rdvz)+" consistent udp "+string(_consistent_udp)+"#"
        +"udp is host "+string(udp_is_host())+" rdvz_client_socket "+string(rdvz_client_socket)
            +" udp_host_socket "+string(udp_host_socket)+" udp_client_socket "+string(udp_client_socket)+"#"
        +"event ip "+string(_ip)+" event port "+string(_port)+" event socket "+string(_socket)+"#"
        +"stated bool "+string(_bool_u8)+" stated message "+string(_msg_id);
    file_text_write_string(_note_file,_note_string);
    file_text_close(_note_file);
    debug_error_message = show_message_async(_note_string);
    return false;
}

return true;

I've been able to reproduce the problem a number of times. Each time something goes wrong the context is that multiple udp clients are connecting back to the meet-up server (using tcp) after being kicked from a closed udp session, and the invalid packet is always one being sent from the meet-up server to a client over the tcp channel:

First example:
Code:
Invalid packet detected by rdvz_id -1 udp_id -1
valid rdvz meta 0 valid udp meta 0 consistent rdvz 1 consistent udp 0
udp is host 0 rdvz_client_socket 0 udp_host_socket -1 udp_client_socket -1
event ip 148.245.24.0 event port 0 event socket 0
stated bool 0 stated message 17
Second example:
Code:
Invalid packet detected by rdvz_id -1 udp_id -1
valid rdvz meta 0 valid udp meta 0 consistent rdvz 0 consistent udp 0
udp is host 0 rdvz_client_socket 0 udp_host_socket -1 udp_client_socket -1
event ip 0.0.0.0 event port 0 event socket 0
stated bool 173 stated message 44990
The negative values mean those variables aren't initialized or were cleared, so no udp identifier or open udp sockets like you'd expect. No meetup id yet because that information hasn't been received from the meetup server. The bool 0 labels the packet as tcp. I've seen 4 different message identifiers in the notices, but each is one sent from the meet up server to a meet up client using tcp.

Every trip has been caused by bizarre IP and port combinations. Namely the IP addresses 148.245.24.0 and 0.0.0.0, both accompanied by port 0.

There's plenty that's strange about this. One thing in particular is that I might have 5 different clients connecting to the meet up server and only 2 of them will throw errors connected to that mystery IP and port (Like I said above, low reproducibility).

Doing some research I can't turn anything up about 148.245.24.0, but 0.0.0.0 does have some significance: "0.0.0.0 is a non-routable meta-address used to designate an invalid, unknown or non-applicable target" (Wikipedia). There's a bit more information here https://www.lifewire.com/four-zero-ip-address-818384 that suggests it serves as a kind of default when a misconfiguration exists.

There's also a little bit of information out there around port 0, which has a part to play in dynamic port allocation (https://www.lifewire.com/port-0-in-tcp-and-udp-818145). If an application wants to bind to a port and passes 0 as the parameter, the system will choose a port for the application. That behavior stretches back to Unix, and if I recall correctly the Windows networking API is modeled on BSD (Unix) sockets. Perhaps that's what happens under the hood when network_create_socket() is used without a port parameter? That's kind of interesting but I'm not sure how relevant it is; all my socket binds request specific port numbers.
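
For reference, GML exposes both behaviors; a quick sketch (variable names arbitrary):
Code:
// explicit port: the bind requests exactly this port
server = network_create_server(network_socket_tcp, 4643, 8);
client = network_create_socket_ext(network_socket_udp, 50000);

// no port argument: the system picks the port, which is the
// port-0 bind behavior described above
auto_socket = network_create_socket(network_socket_tcp);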

Not a complete picture yet, but both failures I've seen incorporate port 0, which appears to be system related. One of them incorporates the IP address 0.0.0.0, which is also system related. I have no idea what the significance of 148.245.24.0 might be, but I guess three out of four isn't bad.

So, the clients reach out to the meet-up server on ip : port 127.0.0.1 : 4643 to create a connection. When a new connection is established the server immediately sends the new client(s) information about other clients and what udp sessions are available. The very first packet received from the server after that is the problematic one (when the bug does manifest), because it's partnered with (what appears to be) system-related metadata that doesn't match the port and IP the clients made their connection to. In the majority of cases the ip is 148.245.24.0 accompanying a correctly formatted buffer, and in some other cases the IP is 0.0.0.0 accompanied by incorrect buffer data.

The context is that multiple new tcp clients are trying to establish connections to the server simultaneously. For testing they've either been running on the same machine, or a room or two apart on my home network. Most likely they're making attempts within the same millisecond if not an even narrower span of time.

I don't have anything concrete so I'm stuck speculating, but I wonder if I'm confronting one of the limits of a single-threaded TCP server? If multiple simultaneous connection requests can lead to a misconfiguration, it would help explain the problem. When I have some time available I'll put together an experiment to see if I can consistently reproduce the problem that way.

Any other insights definitely appreciated.
 

PNelly

Member
Finally made some time to dig back into this a little bit. I ran my meetup server on one machine and several client applications on another machine, set up Wireshark to watch packets in transit, and then went through the motions described in earlier posts to reproduce the problem. I was able to capture the packet data associated with the hiccup and go through it with a hex editor:

Code:
// 12 byte GMS header, whatever's in there
de c0 ad de 0c 00
00 00 a0 00 00 00

// my buffer contents
00                              bool    false = tcp
12 00                           u16     msg id 18 - rdvz_bring_up_to_speed

02 00                           u16     2 - your client id
04 00                           u16     4 - num other clients
03 00                           u16     3 - this client id 3
31 32 37 2E 30 2E 30 2E 31 00   string  "127.0.0.1\0" ip of client 3
FF FF FF FF                     s32     host port for this client -1
00                              u8      num udp clients hosted by this client 0
07                              u8      max clients 7
FF FF FF FF                     s32     client port -1
00                              bool    not in progress
04 00                           u16     client id 4
31 32 37 2E 30 2E 30 2E 31 00   string  "127.0.0.1\0"
FF FF FF FF                     s32     host port -1
00                              u8      num clients 0
07                              u8      max clients 7
FF FF FF FF                     s32     client port -1
00                              bool    not in progress
02 00                           u16     client id 2
31 32 37 2E 30 2E 30 2E 31 00   string  "127.0.0.1\0"
FF FF FF FF                     s32     host port -1
00                              u8      num clients 0
07                              u8      max clients 7
FF FF FF FF                     s32     client port -1
00                              bool    not in progress
06 00                           u16     client id 6
31 32 37 2E 30 2E 30 2E 31 00   string  "127.0.0.1\0"
FF FF FF FF                     s32     host port -1
04                              u8      num clients 4 (must not have updated server yet)
07                              u8      max clients 7
FF FF FF FF                     s32     client port -1
00                              bool    not in progress

// remainder of buffer zero-filled:

00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00

So all of the packet contents check out, and Wireshark tells me the packet source is 192.168.0.105:4643, and the destination is 192.168.0.106:57809.

All of the packet info looks just the way it should, but when I query async_load[? "ip"] and async_load[? "port"] in these instances I'm getting the values 148.245.24.0 and 0. It just doesn't make any sense.

I'm getting bad data out of async_load but I have no idea why. If lots of async networking events are happening in close proximity, using both udp and tcp, could that metadata be corrupted somehow? About all I can do is speculate without knowing a little more about how it works under the hood.

Of course, any insights or thoughts on what else to check out are appreciated.
 

Tsa05

Member
You aren't in Mexico, by any chance, are you? Could be your public-facing IP ;P
Other thing to consider, though--remember that GameMaker fires an event whenever a transmission is received on a port that it's watching. This does *not* guarantee that the transmission came from a GameMaker game client that you programmed. Is it possible that your IP is occasionally getting pinged on various ports by the vast world of spambots? If you're punching through your NAT, then probe traffic would be getting through, and it's up to your game to know how to deal with invalid data as well as valid data.

Otherwise, if it's an error with your code, gonna be hard to track down without code!
 

PNelly

Member
Hey Tsa05, thanks for checking this out. And no (lolz), I don't believe I am seeing bot packets, as all of these particular tests are inside my home network. Moreover, I've been able to verify with Wireshark that the packet coupled with the problem has a valid payload, and with a hex editor that it has the correct source and destination information. Last, each time a problem arises because an incorrect source IP/port is reported, it is the same incorrect source IP/port: 148.245.24.0 and port 0.

You're right, I should've provided more code to support my perspective a long time ago. This is the script that picks up on the problem. It fails the very first test, which I've marked.
Code:
/// valid_packet(source_ip, source_port, related_socket, buffer)

// check packet meta data and contents against expected values to determine
// whether it should be processed at all

var _ip     = argument0;
var _port   = argument1;
var _socket = argument2;
var _buffer = argument3;
var _valid_rdvz_meta = false;
var _valid_udp_meta  = false;
var _valid_meta_data = false;

/* <<<<<<<<<<<<<<<< Gets Flagged Here >>>>>>>>>>>>>>>>> */
// verify metadata
_valid_rdvz_meta = (_ip == rendevouz_ip && _port == rendevouz_tcp_port); // <<<<<<<

if(!udp_is_host()){ 
    _valid_udp_meta =((_ip == udp_host_to_join_ip && _port == udp_host_to_join_port)
                    ||(_ip == udp_host_ip && _port == udp_host_port));
} else {
    var _idx, _map, _num;
    _num = ds_list_size(udp_client_list);
    for(_idx=0;_idx<_num;_idx++){ // check clients
        _map = udp_client_maps[? udp_client_list[| _idx]];
        if(_map[? "ip"] ==_ip && _map[? "port"] == _port){
            _valid_udp_meta = true;
            break; 
        }
    }
    if(!_valid_udp_meta){
        _num = ds_list_size(udp_hole_punch_list);
        for(_idx=0;_idx<_num;_idx++){ // check hole punch
            _map = udp_hole_punch_maps[? udp_hole_punch_list[| _idx]];
            if(_map[? "ip"] ==_ip && _map[? "port"] == _port){
                _valid_udp_meta = true;
                break; 
            }
        }
    }
}

_valid_meta_data = (_valid_rdvz_meta || _valid_udp_meta);

// verify contents
var _bool_u8;
var _msg_id;
var _checksumA = 0, _checksumB = 0; // initialized so the note string below never touches undefined values
var _valid_bool        = false;
var _valid_rdvz_msg_id = false;
var _valid_udp_msg_id  = false;
var _valid_msg_id      = false;
var _valid_checksum    = false;

buffer_seek(_buffer,buffer_seek_start,0);
_bool_u8    = buffer_read(_buffer,buffer_u8);
_valid_bool = (_bool_u8 == 0 || _bool_u8 == 1);
_msg_id     = buffer_read(_buffer,buffer_u16);

if(_bool_u8 == 1){ // is udp and contains checksum
    _checksumA  = buffer_read(_buffer,buffer_u32);
    _checksumB  = buffer_checksum(udp_header_size,_buffer);
    _valid_checksum = (_checksumA == _checksumB);
} else if (_bool_u8 == 0){
    _valid_checksum = true; // no checksum on tcp
}

_valid_rdvz_msg_id = (_msg_id >= rdvz_msg.rdvz_msg_enum_start
                    &&_msg_id <= rdvz_msg.rdvz_msg_enum_end);
_valid_udp_msg_id  = (_msg_id >= udp_msg.udp_msg_enum_start
                    &&_msg_id <= udp_msg.udp_msg_enum_end);
                 
_valid_msg_id = (_valid_rdvz_msg_id || _valid_udp_msg_id);

// verify consistency between socket and message id
var _consistent_rdvz = false;
var _consistent_udp  = false;
var _consistent      = false;
_consistent_rdvz = (_valid_rdvz_msg_id && _socket == rdvz_client_socket);
_consistent_udp  = (_valid_udp_msg_id  &&(_socket == udp_client_socket || _socket == udp_host_socket));
_consistent      = (_consistent_rdvz || _consistent_udp);

/* <<<<<<<<<<< work around I created for this problem >>>>>>>>>>>>> */

// Lookout for bad information from rdvz server and re-establish connection
if(_socket == rdvz_client_socket && _port == 0){
    system_message_set("got bad data from meetup server, attempting reconnect");
    rdvz_client_setup_reconnect();
    return false;
}

// check flags and save packet if it doesn't make sense (Have to remove above return statement)
if(!_valid_meta_data || !_valid_bool || !_valid_msg_id || !_consistent || !_valid_checksum){
    var _time = current_time;
    buffer_save(_buffer,"badbuffer"+string(_time));
    var _note_file = file_text_open_write("BadBufferNote"+string(_time));
    var _note_string = "Invalid packet detected by rdvz_id "+string(rendevouz_id)+" udp_id "+string(udp_id)+"#"
        +"valid rdvz meta "+string(_valid_rdvz_meta)+" valid udp meta "+string(_valid_udp_meta)
            +" consistent rdvz "+string(_consistent_rdvz)+" consistent udp "+string(_consistent_udp)+"#"
        +"udp is host "+string(udp_is_host())+" rdvz_client_socket "+string(rdvz_client_socket)
            +" udp_host_socket "+string(udp_host_socket)+" udp_client_socket "+string(udp_client_socket)+"#"
        +"event ip "+string(_ip)+" event port "+string(_port)+" event socket "+string(_socket)+"#"
        +"stated bool "+string(_bool_u8)+" stated message "+string(_msg_id)+"#"
        +"checksum A +"+string(_checksumA)+" checksum B "+string(_checksumB);
    file_text_write_string(_note_file,_note_string);
    file_text_close(_note_file);
    show_message_async(_note_string);
    return false;
}

// packet is valid
return true;
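
For clarity, buffer_checksum() above is a helper script of mine rather than a built-in; roughly, the idea is this (minimal sketch, assuming the sum covers every byte after the header):
Code:
/// buffer_checksum(header_size, buffer)
var _header_size = argument0;
var _buffer      = argument1;
var _pos  = buffer_tell(_buffer);   // preserve the caller's read position
var _size = buffer_get_size(_buffer);
var _sum  = 0;
buffer_seek(_buffer, buffer_seek_start, _header_size);
repeat (_size - _header_size) {
    _sum = (_sum + buffer_read(_buffer, buffer_u8)) mod 4294967296;
}
buffer_seek(_buffer, buffer_seek_start, _pos); // restore read position
return _sum;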

That valid_packet() script is called on every packet receipt. If a bad packet is detected, the data processing script received_packet() bails out to avoid crashing the program.
Code:
/// received_packet(buffer,size,ip,port,socket)

var _buffer = argument0;
var _size   = argument1;
var _ip     = argument2;
var _port   = argument3;
var _socket = argument4;
var _is_udp, _msg_id, _checksum, _udpr_id, _sqn;
var _udpr_received, _valid_sqn, _sender_udp_id;

        // -- // Check Packet Integrity // -- //
     
if(!valid_packet(_ip,_port,_socket,_buffer)) exit;

// lots of packet processing stuff beneath...

received_packet() is called from handle_network_actions(), which is just a wrapper for the network async event:
Code:
/// handle_network_actions()
// facilitates network event for this architecture
var _type       = async_load[? "type"];
var _socket_id  = async_load[? "id"];
var _ip         = async_load[? "ip"];
var _port       = async_load[? "port"];
var _buffer, _size;

switch (_type) {
    case network_type_connect:
        exit;
    break;
 
    case network_type_disconnect:
        exit;
    break;
 
    case network_type_data:
        _buffer     = async_load[? "buffer"];
        _size       = async_load[? "size"];
        received_packet(_buffer,_size,_ip,_port,_socket_id);
    break;
 
    case network_type_non_blocking_connect:
        exit;
    break;
}

The IP and port for the rendevouz server are hardcoded into the clients inside their create event:
Code:
rendevouz_ip = "192.168.1.6";
rendevouz_tcp_port = 4643;
If you follow the passing of the parameters starting from handle_network_actions() and arriving at the line
Code:
_valid_rdvz_meta = (_ip == rendevouz_ip && _port == rendevouz_tcp_port);
where the problem is detected, you can see that the contents of "_ip" and "_port" come straight from async_load[? "ip"] and async_load[? "port"]. So I am at a loss as to where the mismatched ip/port pair is coming from.

Sorry for the delay, it takes me a long time to write posts sometimes and I'll end up putting it off. Grateful for any insights you might have.

@FrostyCat , I should've mentioned a long time ago that the socket id isn't used in a way that's related to the problem. I appreciate you taking the time to look.

@Yal, I ought to have thanked you earlier as well. I took your last suggestion to heart and ended up creating the stuff that's gotten me the extra information that I do have about the problem now.
 