🛰️Zafer Satılmış - Aviora

AppTcpConnManager

A dedicated RTOS task (tcpConnectionThread) multiplexes one outbound “push” TCP client to the head-end and one local “pull” TCP listener for inbound commands. Both sides are driven with select() on non-blocking / multiplexed sockets and a short poll timeout so the loop stays responsive without blocking the whole stack.

Channels And Callback

  • PUSH_TCP_SOCK_NAME is "push": the gateway connects to the configured server IP/port (ident, alive, outbound payloads; server data is read on the same socket).
  • PULL_TCP_SOCK_NAME is "pull": the gateway binds and listens on the local pull port; a head-end or test tool connects as a client.
  • IncomingMsngCb_t is void (*)(const char *channel, const char *data, unsigned int dataLength); channel is "push" or "pull" so upper layers can dispatch JSON or TLV parsers.
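As a sketch, an upper layer might dispatch on the channel name like this. Only the IncomingMsngCb_t shape and the "push"/"pull" strings come from the source; the routing targets (JSON vs. TLV parser) are stand-ins recorded here in a test variable:

```c
#include <string.h>

/* Hypothetical dispatcher matching the IncomingMsngCb_t signature.
 * lastChannelWasPush stands in for the real parser routing:
 * 1 = push (e.g. JSON parser), 0 = pull (e.g. TLV parser), -1 = unknown. */
static int lastChannelWasPush = -1;

static void incomingMsngCb(const char *channel, const char *data,
                           unsigned int dataLength)
{
    (void)data;
    (void)dataLength;
    if (strcmp(channel, "push") == 0)
        lastChannelWasPush = 1;    /* route to the JSON parser */
    else if (strcmp(channel, "pull") == 0)
        lastChannelWasPush = 0;    /* route to the TLV parser */
    else
        lastChannelWasPush = -1;   /* unexpected channel name */
}
```

Because the channel string is passed on every call, one callback can serve both sockets without extra registration state.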

BSP select() Multiplexing

The connection thread uses the socket SELECT macro (same role as BSD select): build fd_set read masks (and write mask for push while the non-blocking connect finishes), call SELECT(max_fd+1, &readfds, …, &timeout) with timeout.tv_usec = 10000 (10 ms) and tv_sec = 0. That timeout is one “tick” for idle processing: when select returns 0 (no fd ready), pull-side keep-alive counters advance. Push and pull blocks are evaluated in sequence inside the infinite loop.
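A minimal sketch of one poll tick, assuming plain BSD select() stands in for the BSP SELECT macro (poll_tick is an illustrative name; the 10 ms timeout matches the text):

```c
#include <sys/select.h>
#include <unistd.h>

/* One "tick" of the connection thread's poll loop: wait up to 10 ms for
 * fd to become readable.  Returns select()'s result: 0 means timeout
 * (an idle tick, when keep-alive counters advance), >0 means readable. */
static int poll_tick(int fd)
{
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    timeout.tv_sec  = 0;
    timeout.tv_usec = 10000;   /* 10 ms, as in the thread loop */
    return select(fd + 1, &readfds, NULL, NULL, &timeout);
}
```

Because the timeout is rebuilt every iteration, the loop degrades gracefully: with no traffic it wakes roughly 100 times per second to run housekeeping, and with traffic it returns immediately.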

High-Level Thread Loop
%%{init: {'flowchart': {'nodeSpacing': 12, 'rankSpacing': 110, 'padding': 10}, 'themeVariables': {'fontSize': '13px'}}}%%
flowchart TD
  L[Loop forever] --> T{taskKeepDisconnect AND sockets open?}
  T -->|yes| Q[Queue close push + pull]
  T -->|no| P[Process task flags]
  P --> F1[taskPullSocCreat → createPullSocket LISTEN]
  P --> F2[taskClosePullSock → closePullSocket]
  P --> F3[taskConnectPush → connectPushSocket NONBLOCK]
  P --> F4[taskClosePushSock → disconnectPushSocket]
  Q --> PULL[Pull block: SELECT listen + clients]
  PULL --> PUSH[Push block: SELECT read + write until connected]
  PUSH --> L

Pull Side: Listen, Accept, And Idle Timeout

When the pull listener exists (pullSockID > 0), FD_SET includes the listen socket and every active accepted client in readfds. ACCEPT runs when the listen fd is readable; new connections are stored in the first free slot of clientSocList[]. If all MAX_PULL_CLIENT_NUMBER slots are busy, the new socket is closed immediately (“Client List full”).

Per-client activity is tracked in clietKeepAliveCounter[i] (spelling matches the source). Any received data resets that slot's counter to 0. When select returns 0 (timeout, no read event on any pull fd), every connected client's counter increments by one; once clietKeepAliveCounter[i] > 36000 the implementation closes that client. At 10 ms per tick, 36000 ticks ≈ 360 s, i.e. about 6 minutes without data. Strictly, the counter measures idle select iterations rather than wall-clock time, so other work that delays the loop stretches the real interval; the intended threshold is roughly 6 minutes with no payload on that pull client.
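The slot-search and idle-timeout rules above can be sketched as pure helpers. find_free_slot and pull_idle_tick are illustrative names; the 36000-tick limit and the "fd <= 0 means free" convention follow the text:

```c
#define MAX_PULL_CLIENT_NUMBER 1
#define PULL_IDLE_LIMIT        36000   /* ticks: 36000 x 10 ms ~ 6 min */

/* Find the first free slot (fd <= 0) in the client list; -1 when full,
 * in which case the freshly accepted socket is closed immediately. */
static int find_free_slot(const int *clientSocList, int n)
{
    for (int i = 0; i < n; ++i)
        if (clientSocList[i] <= 0)
            return i;
    return -1;
}

/* Advance one idle tick for a client slot (called when select returns 0).
 * Returns 1 when the client has been idle past the limit and should be
 * closed, 0 otherwise.  Receiving data resets the counter to 0 elsewhere. */
static int pull_idle_tick(unsigned int *counter)
{
    ++*counter;
    return (*counter > PULL_IDLE_LIMIT) ? 1 : 0;
}
```

Keeping the threshold test strictly greater-than means a client is closed on the 36001st consecutive idle tick, not the 36000th.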

Constants: MAX_PULL_CLIENT_NUMBER is 1 in the current header — one simultaneous pull client. LISTEN backlog is 1.
Pull — Select and Accept
%%{init: {'flowchart': {'nodeSpacing': 14, 'rankSpacing': 16, 'padding': 4}, 'themeVariables': {'fontSize': '11px'}}}%%
flowchart TD
  S[SELECT readfds: listen + client fds] --> R{activity > 0?}
  R -->|yes| A{listen fd readable?}
  A -->|yes| AC[ACCEPT new client]
  AC --> SL{Free slot in clientSocList?}
  SL -->|yes| AD[Store fd, keepAlive = 0]
  SL -->|no| RJ[Close new socket — full]
  A -->|no| RD[For each client fd readable]
  RD --> RECV[RECV data]
  RECV -->|size > 0| CB[incomingMsngCb pull]
  RECV -->|else| CLS[Close client]
  CB --> Z[keepAlive counter = 0]
  R -->|no activity == 0| IDLE[For each connected client: keepAlive++]
  IDLE --> TO{keepAlive > 36000?}
  TO -->|yes| KILL[Close client — idle ~6 min]
  TO -->|no| S

Push Side: Non-Blocking Connect Then Blocking I/O

connectPushSocket creates the TCP socket and sets O_NONBLOCK before CONNECT. While the handshake is in flight, pushSockConnecting is set; the thread waits on select with the socket in writefds until the connection completes, then checks SO_ERROR. On success the code clears non-blocking: F_SETFL with fl & ~O_NONBLOCK for normal send/receive behaviour. Incoming server data uses RECV on read readiness and calls incomingMsngCb("push", …).
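The post-handshake step can be sketched as follows, assuming POSIX fcntl(); clear_nonblock is an illustrative name, not a symbol from the source:

```c
#include <fcntl.h>
#include <unistd.h>

/* After select() reports the push socket writable and GETSOCKOPT shows
 * SO_ERROR == 0, restore blocking mode exactly as described in the text:
 * F_SETFL with fl & ~O_NONBLOCK.  Returns 0 on success, -1 on error. */
static int clear_nonblock(int fd)
{
    int fl = fcntl(fd, F_GETFL, 0);
    if (fl < 0)
        return -1;
    return fcntl(fd, F_SETFL, fl & ~O_NONBLOCK);
}
```

Reading the flags first and masking out only O_NONBLOCK preserves any other status flags the BSP may have set on the socket.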

Push — select read / write
%%{init: {'flowchart': {'nodeSpacing': 16, 'rankSpacing': 18, 'padding': 4}, 'themeVariables': {'fontSize': '11px'}}}%%
flowchart TD
  PS[pushSockID > 0] --> W{pushSockConnecting?}
  W -->|yes| SEL[SELECT readfds + writefds]
  W -->|no| SELR[SELECT readfds only]
  SEL --> WR{writable AND connecting?}
  WR -->|yes| OK[GETSOCKOPT SO_ERROR]
  OK --> BL[If OK: clear NONBLOCK]
  SEL --> RD[If readable: RECV → incomingMsngCb push]
  SELR --> RD

Connect And Disconnect: Request Functions

The worker thread does not expose blocking connect/disconnect calls. Instead, bit flags in gs_taskList are set from public APIs; the loop performs the actual socket work when safe.

  • appTcpConnManagerRequestConnect: sets taskKeepDisconnect = false and taskPullSocCreat = true so the listener can be created and operation is allowed.
  • appTcpConnManagerRequestDisconnect: sets taskKeepDisconnect = true; when sockets are still open the loop queues taskClosePushSock and taskClosePullSock to tear down both sides.
  • appTcpConnManagerRequestPushConnect: sets taskConnectPush = true to start (or continue) the outbound push connection.
  • appTcpConnManagerRequestPushDisconnect: sets taskClosePushSock = true to close only the push socket.

Poll the state with appTcpConnManagerIsConnectedPush() (true when pushSockID > 0 and connect handshake finished), appTcpConnManagerIsPullReady(), and appTcpConnManagerAnyPullClient().
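A sketch of the flag-setting pattern, assuming a plain struct stands in for gs_taskList; the field names follow the flags named in the text, while the struct type and the shortened function names are illustrative:

```c
#include <stdbool.h>

/* Hypothetical mirror of the gs_taskList flag set consumed by the loop. */
typedef struct {
    bool taskKeepDisconnect;
    bool taskPullSocCreat;
    bool taskConnectPush;
    bool taskClosePushSock;
    bool taskClosePullSock;
} TaskFlags;

/* Request functions only flip flags; the worker thread does the socket
 * work on its next pass, so callers never block on connect/close. */
static void requestConnect(TaskFlags *t)
{
    t->taskKeepDisconnect = false;
    t->taskPullSocCreat   = true;   /* loop will create the pull listener */
}

static void requestPushDisconnect(TaskFlags *t)
{
    t->taskClosePushSock = true;    /* loop closes only the push socket */
}
```

This request/apply split keeps all socket calls on one thread, which avoids locking around the fd_set bookkeeping.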

Request Flags → Socket Actions
flowchart LR
  RC[RequestConnect] --> F1[taskPullSocCreat]
  RD[RequestDisconnect] --> F2[taskClosePush + taskClosePull]
  RPC[RequestPushConnect] --> F3[taskConnectPush]
  RPD[RequestPushDisconnect] --> F4[taskClosePush]
  F1 --> TH[tcpConnectionThread applies in loop]
  F2 --> TH
  F3 --> TH
  F4 --> TH

Send Path

appTcpConnManagerSend(channel, data, len) targets "push" via SEND on the push fd when connected; if not connected it may set taskConnectPush for a deferred open. For "pull" it sends on clientSocList[0] when the pull listener and a client exist, consistent with MAX_PULL_CLIENT_NUMBER == 1 (a single client slot).
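The routing decision can be sketched as a pure helper; route_send_fd is an illustrative name, and the actual SEND call and deferred taskConnectPush handling are left to the caller:

```c
#include <string.h>

/* Pick the target fd for a channel: the push socket for "push", the
 * single pull client slot for "pull".  Returns -1 when no usable fd
 * exists (for "push" the caller may then set taskConnectPush). */
static int route_send_fd(const char *channel, int pushSockID,
                         const int *clientSocList)
{
    if (strcmp(channel, "push") == 0)
        return (pushSockID > 0) ? pushSockID : -1;
    if (strcmp(channel, "pull") == 0)
        return (clientSocList[0] > 0) ? clientSocList[0] : -1;
    return -1;   /* unknown channel */
}
```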

Lifecycle: Start And Stop

appTcpConnManagerStart stores server IP, server port, pull port, and the callback, then creates the TcpConn task. appTcpConnManagerStop closes pull and push, clears task flags, and deletes the task.

Key API Summary

  • IncomingMsngCb_t: callback for inbound bytes on "push" or "pull".
  • appTcpConnManagerStart(ip, serverPort, pullPort, cb): spawn the TCP manager task.
  • appTcpConnManagerStop(): shut down sockets and the task.
  • appTcpConnManagerRequestConnect / RequestDisconnect: enable pull creation and allow the link, versus full teardown.
  • appTcpConnManagerRequestPushConnect / RequestPushDisconnect: push-only connect or close.
  • appTcpConnManagerSend(ch, data, len): send on the push or pull channel.
  • appTcpConnManagerIsConnectedPush: push connected and not mid non-blocking handshake.