
ipt2socks's Introduction

ipt2socks (libev)

A utility similar to redsocks/redsocks2 that converts iptables (REDIRECT/TPROXY) traffic into socks5 (tcp/udp) traffic. Beyond that, it provides no non-essential features.

ipt2socks can add iptables transparent-proxy support to "local proxies" that only accept socks5 inbound connections, such as ss/ssr's ss-local/ssr-local, v2ray's socks5 inbound, trojan's socks5 client, and so on.

Highlights

  • Uses the splice() system call; in the ideal case, forwarding is zero-copy.
  • IPv4/IPv6 dual-stack support, including a pure-TPROXY transparent proxy mode.
  • TCP transparent proxying via either REDIRECT or TPROXY; UDP transparent proxying via TPROXY.
  • The UDP transparent proxy supports Full Cone NAT, provided the backend socks5 server does too.
  • Multi-threading with SO_REUSEPORT; each thread runs its own event loop, improving performance significantly.

How to build

For convenience, the releases page provides statically linked musl binaries for common Linux architectures.

git clone https://github.com/zfl9/ipt2socks
cd ipt2socks
make && sudo make install

ipt2socks installs to /usr/local/bin/ipt2socks by default; to install elsewhere, use e.g. make install DESTDIR=/opt/local/bin.

For cross-compiling, just set the CC variable, e.g. make CC=aarch64-linux-gnu-gcc (if you hit errors or odd behavior, run make clean and try again).

How to run

# -s specifies the socks5 server ip
# -p specifies the socks5 server port
ipt2socks -s 127.0.0.1 -p 1080

# To run in the background, start it like this:
(ipt2socks -s 127.0.0.1 -p 1080 </dev/null &>>/var/log/ipt2socks.log &)

Once ipt2socks is running, configure the corresponding iptables/nftables rules.
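For orientation, a minimal TPROXY rule set for the default ipt2socks options might look like the sketch below. The fwmark (0x2333), routing table (100), and listen port (60080) are illustrative values that must match your environment, and a real configuration would also exclude loopback/LAN destinations before the TPROXY jump:

```shell
# deliver fwmark-tagged packets locally (assumed mark 0x2333, table 100)
ip rule add fwmark 0x2333 table 100
ip route add local default dev lo table 100

# divert TCP and UDP to ipt2socks's default listen port (60080)
iptables -t mangle -N IPT2SOCKS
iptables -t mangle -A IPT2SOCKS -p tcp -j TPROXY --on-ip 127.0.0.1 --on-port 60080 --tproxy-mark 0x2333
iptables -t mangle -A IPT2SOCKS -p udp -j TPROXY --on-ip 127.0.0.1 --on-port 60080 --tproxy-mark 0x2333
iptables -t mangle -A PREROUTING -j IPT2SOCKS
```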

Full list of options

$ ipt2socks --help
usage: ipt2socks <options...>. the existing options are as follows:
 -s, --server-addr <addr>           socks5 server ip, default: 127.0.0.1
 -p, --server-port <port>           socks5 server port, default: 1080
 -a, --auth-username <user>         username for socks5 authentication
 -k, --auth-password <passwd>       password for socks5 authentication
 -b, --listen-addr4 <addr>          listen ipv4 address, default: 127.0.0.1
 -B, --listen-addr6 <addr>          listen ipv6 address, default: ::1
 -l, --listen-port <port>           listen port number, default: 60080
 -S, --tcp-syncnt <cnt>             change the number of tcp syn retransmits
 -c, --cache-size <size>            udp context cache maxsize, default: 256
 -o, --udp-timeout <sec>            udp context idle timeout, default: 60
 -j, --thread-nums <num>            number of the worker threads, default: 1
 -n, --nofile-limit <num>           set nofile limit, may need root privilege
 -u, --run-user <user>              run as the given user, need root privilege
 -T, --tcp-only                     listen tcp only, aka: disable udp proxy
 -U, --udp-only                     listen udp only, aka: disable tcp proxy
 -4, --ipv4-only                    listen ipv4 only, aka: disable ipv6 proxy
 -6, --ipv6-only                    listen ipv6 only, aka: disable ipv4 proxy
 -R, --redirect                     use redirect instead of tproxy for tcp
 -r, --reuse-port                   enable so_reuseport for single thread
 -w, --tfo-accept                   enable tcp_fastopen for server socket
 -W, --tfo-connect                  enable tcp_fastopen for client socket
 -v, --verbose                      print verbose log, affect performance
 -V, --version                      print ipt2socks version number and exit
 -h, --help                         print ipt2socks help information and exit
  • -s: IP address of the socks5 server, default 127.0.0.1.
  • -p: listen port of the socks5 server, default 1080.
  • -a: username for socks5 authentication (if required).
  • -k: password for socks5 authentication (if required).
  • -b: local IPv4 listen address, default 127.0.0.1.
  • -B: local IPv6 listen address, default ::1.
  • -l: local IPv4/IPv6 listen port, default 60080.
  • -S: TCP SYN retransmit count, i.e. the connect timeout toward the socks5 server.
  • -c: maximum number of UDP contexts, default 256.
  • -o: idle timeout of a UDP context, default 60 seconds.
  • -j: number of worker threads to start, default 1.
  • -n: nofile (open file descriptor) limit for the ipt2socks process.
  • -u: run-as-user; requires root privileges to take effect.
  • -T: TCP transparent proxy only, i.e. disable the UDP transparent proxy.
  • -U: UDP transparent proxy only, i.e. disable the TCP transparent proxy.
  • -4: IPv4 transparent proxy only, i.e. disable the IPv6 transparent proxy.
  • -6: IPv6 transparent proxy only, i.e. disable the IPv4 transparent proxy.
  • -R: use REDIRECT (DNAT) instead of TPROXY (TCP only).
  • -r: enable SO_REUSEPORT even in single-thread mode.
  • -w: enable TCP Fast Open on the server socket (set the kernel parameters first).
  • -W: enable TCP Fast Open on the client socket (set the kernel parameters first).
  • -v: print verbose runtime logs (affects performance).

Running as a non-root user

  • sudo setcap cap_net_bind_service,cap_net_admin+ep /usr/local/bin/ipt2socks
  • If you start ipt2socks as root, you can also pass -u nobody to switch to the nobody user
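Combining the two approaches above into one sketch (the binary path and the nobody account are assumptions about the target system; CAP_NET_ADMIN is what TPROXY's IP_TRANSPARENT socket option requires):

```shell
# one-time: grant the installed binary the needed capabilities,
# after which an unprivileged user can start it directly
sudo setcap cap_net_bind_service,cap_net_admin+ep /usr/local/bin/ipt2socks
ipt2socks -s 127.0.0.1 -p 1080

# or: start as root and let ipt2socks drop to the nobody user itself
sudo ipt2socks -s 127.0.0.1 -p 1080 -u nobody
```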

ipt2socks's People

Contributors

zfl9


ipt2socks's Issues

compile errors in alpine linux v3.12

Here are the error messages while compiling:

gcc -w -O2 -c libev/ev.c -o ev.o
gcc -std=c99 -Wall -Wextra -O2 -pthread -c ipt2socks.c -o ipt2socks.o
ipt2socks.c:40:28: error: unknown type name '__off64_t'; did you mean 'off64_t'?
40 | ssize_t splice(int fdin, __off64_t *offin, int fdout, __off64_t *offout, size_t len, unsigned int flags) {
| ^~~~~~~~~
| off64_t
ipt2socks.c:40:57: error: unknown type name '__off64_t'; did you mean 'off64_t'?
40 | ssize_t splice(int fdin, __off64_t *offin, int fdout, __off64_t *offout, size_t len, unsigned int flags) {
| ^~~~~~~~~
| off64_t
ipt2socks.c: In function 'tcp_stream_payload_forward_cb':
ipt2socks.c:743:25: warning: implicit declaration of function 'splice' [-Wimplicit-function-declaration]
743 | ssize_t nrecv = splice(self_watcher->fd, NULL, self_pipefd[1], NULL, TCP_SPLICE_MAXLEN, SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
| ^~~~~~
make: *** [Makefile:28: ipt2socks.o] Error 1

gcc version is gcc (Alpine 9.3.0), I've installed libev and libev-dev from alpine package repository.
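For what it's worth, `__off64_t` is a glibc-internal type name that musl (Alpine's libc) does not define, which is why the build fails there; the compiler's own suggestion, the public `off64_t`, is one possible substitute. A local workaround (an untested sketch, assuming this declaration is the only use of the type) is:

```shell
# in the ipt2socks source tree, rewrite the glibc-internal type name to the
# public one suggested by gcc, then rebuild:
#   sed -i 's/__off64_t/off64_t/g' ipt2socks.c && make
# the substitution itself, demonstrated on the offending declaration:
echo 'ssize_t splice(int fdin, __off64_t *offin, int fdout, __off64_t *offout, size_t len, unsigned int flags);' \
  | sed 's/__off64_t/off64_t/g'
```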

A question about the -R option

"-R: use REDIRECT (DNAT) instead of TPROXY (TCP only)."
Why does the -R option ignore the -b 127.0.0.1 setting and bind directly to 0.0.0.0?
As I recall, REDIRECT can forward traffic to 127.0.0.1 just as well; it only requires the kernel parameter
net.ipv4.conf.all.route_localnet=1

[udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address in use

My UDP has stopped working again; TCP is fine.

The environment is OpenWrt 19.07.4, and the socks backend is trojan 1.16.0.
In February this year, openwrt + trojan + ipt2socks + tproxy could transparently proxy UDP.
Rolling trojan and ipt2socks back to the February versions doesn't help either, so I don't know where the problem is.

Any suggestions? Thanks.

The ipt2socks error message is:
2020-09-24 15:33:34 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address in use

config ipt2socks
option enable '1'
option server_addr '127.0.0.1'
option server_port '1081'
option auth_username ''
option auth_password ''
option listen_addr4 '0.0.0.0'
option listen_addr6 '::1'
option listen_port '12349'
option tcp_syncnt ''
option cache_size '512'
option udp_timeout '60'
option thread_nums '4'
option nofile_limit '65535'
option tcp_only '0'
option udp_only '0'
option ipv4_only '1'
option ipv6_only '0'
option redirect '0'
option reuse_port '1'
option tfo_accept '0'
option tfo_connect '0'

ERR: [udp_socks5_recv_tcpmessage_cb] recv unknown msg from socks5 server, release ctx

I'm currently deploying ss-tproxy via docker, where

  • ss-tproxy 4.8 (with chinadns-ng 2024.03.27) runs in container A
  • ipt2socks runs in container B, under the proxy group (gid 9999)
  • trojan-go runs in container C, under the proxy group (gid 9999)
  • all three containers use the host network and share it (via --network container:name)

The test command curl http://1.1.1.1 works: ipt2socks and trojan-go log no errors or anomalies, and the expected result (a 301 Moved Permanently) comes back.
The test command nslookup ip.sb 1.1.1.1 fails; the log is below.
Querying the port chinadns-ng listens on (127.0.0.1:60053) with nslookup produces the same error and an almost identical log.

Please help me track this down, thanks.

chinadns-ng log:

2024-04-04 17:30:26 I [server.zig:302 QueryLog.query] query(id:4964, tag:none, qtype:1, 'ip.sb') from ::ffff:127.0.0.1#52228
2024-04-04 17:30:26 I [server.zig:349 QueryLog.forward] forward query(qid:7, from:udp, 'ip.sb') to trust group
2024-04-04 17:30:26 I [Upstream.zig:490 Group.send] forward query(qid:7, from:udp) to upstream udpin://8.8.8.8
2024-04-04 17:30:26 I [Upstream.zig:490 Group.send] forward query(qid:7, from:udp) to upstream udpin://1.1.1.1
2024-04-04 17:30:26 I [Upstream.zig:490 Group.send] forward query(qid:7, from:udp) to upstream udpin://2001:4860:4860::8888
2024-04-04 17:30:26 I [Upstream.zig:490 Group.send] forward query(qid:7, from:udp) to upstream udpin://2606:4700:4700::1111
2024-04-04 17:30:31 I [server.zig:302 QueryLog.query] query(id:4964, tag:none, qtype:1, 'ip.sb') from ::ffff:127.0.0.1#40375
2024-04-04 17:30:31 I [server.zig:349 QueryLog.forward] forward query(qid:8, from:udp, 'ip.sb') to trust group
2024-04-04 17:30:31 I [Upstream.zig:490 Group.send] forward query(qid:8, from:udp) to upstream udpin://8.8.8.8
2024-04-04 17:30:31 I [Upstream.zig:490 Group.send] forward query(qid:8, from:udp) to upstream udpin://1.1.1.1
2024-04-04 17:30:31 I [Upstream.zig:490 Group.send] forward query(qid:8, from:udp) to upstream udpin://2001:4860:4860::8888
2024-04-04 17:30:31 I [Upstream.zig:490 Group.send] forward query(qid:8, from:udp) to upstream udpin://2606:4700:4700::1111
2024-04-04 17:30:31 W [server.zig:827 on_timeout] query(qid:7, id:4964, tag:none) from udp://::ffff:127.0.0.1#52228 [timeout]
2024-04-04 17:30:36 W [server.zig:827 on_timeout] query(qid:8, id:4964, tag:none) from udp://::ffff:127.0.0.1#40375 [timeout]
2024-04-04 17:30:36 I [server.zig:302 QueryLog.query] query(id:4964, tag:none, qtype:1, 'ip.sb') from ::ffff:127.0.0.1#45705
2024-04-04 17:30:36 I [server.zig:349 QueryLog.forward] forward query(qid:9, from:udp, 'ip.sb') to trust group
2024-04-04 17:30:36 I [Upstream.zig:490 Group.send] forward query(qid:9, from:udp) to upstream udpin://8.8.8.8
2024-04-04 17:30:36 I [Upstream.zig:490 Group.send] forward query(qid:9, from:udp) to upstream udpin://1.1.1.1
2024-04-04 17:30:36 I [Upstream.zig:490 Group.send] forward query(qid:9, from:udp) to upstream udpin://2001:4860:4860::8888
2024-04-04 17:30:36 I [Upstream.zig:490 Group.send] forward query(qid:9, from:udp) to upstream udpin://2606:4700:4700::1111
2024-04-04 17:30:41 W [server.zig:827 on_timeout] query(qid:9, id:4964, tag:none) from udp://::ffff:127.0.0.1#45705 [timeout]

The ipt2socks container reports the following errors:

ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] recv from xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx#34304, nrecv:23
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#1080 ...
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] recv from xx.xx.x.xxx#59313, nrecv:23
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#1080 ...
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#1080 succeeded
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#1080, nsend:3
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#1080 succeeded
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#1080, nsend:3
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] recv from xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx#43805, nrecv:23
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#1080 ...
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] recv from xx.xx.x.xxx#34522, nrecv:23
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#1080 ...
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#1080 succeeded
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#1080, nsend:3
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#1080 succeeded
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#1080, nsend:3
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] recv from 127.0.0.1#1080, nrecv:2
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] send to 127.0.0.1#1080, nsend:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] recv from 127.0.0.1#1080, nrecv:2
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] send to 127.0.0.1#1080, nsend:22
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] recv from 127.0.0.1#1080, nrecv:2
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] send to 127.0.0.1#1080, nsend:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] recv from 127.0.0.1#1080, nrecv:2
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_authresp_cb] send to 127.0.0.1#1080, nsend:22
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] recv from 127.0.0.1#1080, nrecv:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] send to 2001:4860:4860::8888#53, nsend:45
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] recv from 127.0.0.1#1080, nrecv:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] send to 1.1.1.1#53, nsend:33
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] recv from 127.0.0.1#1080, nrecv:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] send to 2606:4700:4700::1111#53, nsend:45
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] recv from 127.0.0.1#1080, nrecv:10
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_recv_proxyresp_cb] send to 8.8.8.8#53, nsend:33
ipt2socks-1  | 2024-04-04 17:30:26 ERR: [udp_socks5_recv_tcpmessage_cb] recv unknown msg from socks5 server, release ctx
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_context_timeout_cb] context will be released, reason: manual
ipt2socks-1  | 2024-04-04 17:30:26 ERR: [udp_socks5_recv_tcpmessage_cb] recv unknown msg from socks5 server, release ctx
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_context_timeout_cb] context will be released, reason: manual
ipt2socks-1  | 2024-04-04 17:30:26 ERR: [udp_socks5_recv_tcpmessage_cb] recv unknown msg from socks5 server, release ctx
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_context_timeout_cb] context will be released, reason: manual
ipt2socks-1  | 2024-04-04 17:30:26 ERR: [udp_socks5_recv_tcpmessage_cb] recv unknown msg from socks5 server, release ctx
ipt2socks-1  | 2024-04-04 17:30:26 INF: [udp_socks5_context_timeout_cb] context will be released, reason: manual
ipt2socks-1  | 2024-04-04 17:30:31 INF: [udp_tproxy_recvmsg_cb] recv from xx.xx.x.xxx#59313, nrecv:23
ipt2socks-1  | 2024-04-04 17:30:31 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#1080 ...
ipt2socks-1  | 2024-04-04 17:30:31 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#1080 succeeded
ipt2socks-1  | 2024-04-04 17:30:31 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#1080, nsend:3
ipt2socks-1  | 2024-04-04 17:30:31 INF: [udp_tproxy_recvmsg_cb] recv from xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx#34304, nrecv:23

And the trojan-go container logs the following:

trojan-1  | [INFO]  2024/04/04 17:30:26 socks connection from 127.0.0.1:39880 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:26 socks connection from 127.0.0.1:39888 metadata 0.0.0.0:0
trojan-1  | [INFO]  2024/04/04 17:30:26 socks connection from 127.0.0.1:39896 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:26 socks connection from 127.0.0.1:39902 metadata 0.0.0.0:0
trojan-1  | [INFO]  2024/04/04 17:30:31 socks connection from 127.0.0.1:57196 metadata 0.0.0.0:0
trojan-1  | [INFO]  2024/04/04 17:30:31 socks connection from 127.0.0.1:57202 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:31 socks connection from 127.0.0.1:57186 metadata 0.0.0.0:0
trojan-1  | [INFO]  2024/04/04 17:30:31 socks connection from 127.0.0.1:57194 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:36 socks connection from 127.0.0.1:57230 metadata 0.0.0.0:0
trojan-1  | [INFO]  2024/04/04 17:30:36 socks connection from 127.0.0.1:57222 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:36 socks connection from 127.0.0.1:57240 metadata [::]:0
trojan-1  | [INFO]  2024/04/04 17:30:36 socks connection from 127.0.0.1:57210 metadata 0.0.0.0:0

ERROR: package/feeds/helloworld/ipt2socks failed to build.

The following errors occurred during the build:
make[2] -C feeds/helloworld/chinadns-ng download
make[2] -C feeds/helloworld/dns2tcp download
make[2] -C feeds/helloworld/dns2socks download
make[2] -C feeds/helloworld/hysteria download
make[2] -C feeds/helloworld/ipt2socks download
make[2] -C feeds/helloworld/lua-neturl download
make[2] -C feeds/helloworld/microsocks download
make[2] -C feeds/helloworld/luci-app-ssr-plus download
make[2] -C feeds/helloworld/redsocks2 download
make[2] -C feeds/helloworld/naiveproxy download
make[2] -C feeds/helloworld/shadowsocksr-libev download
make[2] -C feeds/helloworld/simple-obfs download
make[2] -C feeds/helloworld/tcping download
make[2] -C feeds/helloworld/trojan download
make[2] -C feeds/helloworld/v2ray-plugin download
ERROR: package/feeds/helloworld/ipt2socks failed to build.
make[2] -C feeds/helloworld/xray-core download
make package/download: build failed. Please re-run make with -j1 V=s or V=sc for a higher verbosity level to see what's going on
make[1] target/download
make[2] -C target/linux download

make[3]: Entering directory '/workdir/openwrt/feeds/helloworld/ipt2socks'
mkdir -p /workdir/openwrt/dl
SHELL= flock /workdir/openwrt/tmp/.ipt2socks-1.1.3.tar.gz.flock -c ' /workdir/openwrt/scripts/download.pl "/workdir/openwrt/dl" "ipt2socks-1.1.3.tar.gz" "73a2498dc95934c225d358707e7f7d060b5ce81aa45260ada09cbd15207d27d1" "" "https://codeload.github.com/zfl9/ipt2socks/tar.gz/v1.1.3?" '

  • curl -f --connect-timeout 20 --retry 5 --location https://codeload.github.com/zfl9/ipt2socks/tar.gz/v1.1.3?/ipt2socks-1.1.3.tar.gz
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 109k 100 109k 0 0 2545k 0 --:--:-- --:--:-- --:--:-- 2545k
    Hash of the downloaded file does not match (file: 5279eb1cb7555cf9292423cc9f672dc43e6e214b3411a6df26a6a1cfa59d88b7, requested: 73a2498dc95934c225d358707e7f7d060b5ce81aa45260ada09cbd15207d27d1) - deleting download.

  • curl -f --connect-timeout 20 --retry 5 --location https://sources.cdn.openwrt.org/ipt2socks-1.1.3.tar.gz
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    curl: (22) The requested URL returned error: 404
    Download failed.

  • curl -f --connect-timeout 20 --retry 5 --location https://sources.openwrt.org/ipt2socks-1.1.3.tar.gz
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    curl: (22) The requested URL returned error: 404 Not Found
    Download failed.

  • curl -f --connect-timeout 20 --retry 5 --location https://mirror2.openwrt.org/sources/ipt2socks-1.1.3.tar.gz
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    curl: (22) The requested URL returned error: 404 Not Found
    Download failed.
    No more mirrors to try - giving up.
    make[3]: *** [Makefile:46: /workdir/openwrt/dl/ipt2socks-1.1.3.tar.gz] Error 2
    make[3]: Leaving directory '/workdir/openwrt/feeds/helloworld/ipt2socks'
    time: package/feeds/helloworld/ipt2socks/compile#0.19#0.06#1.06
    ERROR: package/feeds/helloworld/ipt2socks failed to build.
    make[2]: *** [package/Makefile:116: package/feeds/helloworld/ipt2socks/compile] Error 1
    make[2]: Leaving directory '/workdir/openwrt'
    make[1]: Leaving directory '/workdir/openwrt'
    make[1]: *** [package/Makefile:110: /workdir/openwrt/staging_dir/target-arm_cortex-a7+neon-vfpv4_musl_eabi/stamp/.package_compile] Error 2
    make: *** [/workdir/openwrt/include/toplevel.mk:231: world] Error 2
    Error: Process completed with exit code 2.
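The root cause is visible in the log above: the tarball GitHub serves for v1.1.3 now hashes to 5279eb…, while the feed's Makefile pins 73a249…. One way to check what you actually downloaded (a sketch; `sha256sum` availability and the file location are assumptions), after which updating the pinned PKG_HASH, once you have audited the archive, is the usual fix:

```shell
# recompute the sha256 of the downloaded tarball and compare it against the
# hash pinned in the package Makefile; a mismatch means the archive content
# served upstream has changed (e.g. a regenerated tag tarball)
tarball=ipt2socks-1.1.3.tar.gz
pinned=73a2498dc95934c225d358707e7f7d060b5ce81aa45260ada09cbd15207d27d1
[ -f "$tarball" ] || tarball=/dev/null   # stand-in so the snippet runs anywhere
actual=$(sha256sum "$tarball" | awk '{print $1}')
[ "$actual" = "$pinned" ] && echo 'hash ok' || echo "hash mismatch: got $actual"
```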

After passing through the proxy, UDP packets' destination IP and port both become 0

I'm using the ss-tproxy proxy script together with ipt2socks for socks5 transparent proxying; ss-tproxy runs in gfwlist mode and the rest of the configuration is mostly default. TCP packets get their destination IP and port resolved correctly, but for UDP packets the destination IP and port both come out as 0. Could you help me look into what is causing this?
The ss-tproxy.conf is as follows:

## mode
#mode='global'  # global: {ignlist} goes direct, everything else via proxy
mode='gfwlist' # blacklist: {gfwlist} via proxy, everything else direct (the back-to-China mode also uses this, see below)
#mode='chnroute' # mainland whitelist: {gfwlist} via proxy, {ignlist,chnlist,chnroute} direct, everything else via proxy

## ipv4/6
ipv4='true'     # enable transparent proxying for ipv4: true = on, false = off
ipv6='false'    # enable transparent proxying for ipv6: true = on, false = off

## tproxy
tproxy='true'  # true:  TPROXY(tcp)   + TPROXY(udp) ## pure tproxy mode ##
                # false: REDIRECT(tcp) + TPROXY(udp) ## redirect mode  ##
                #
                # which to use depends on the transparent-proxy inbound protocol of the local proxy process
                #
                # ss/ssr/trojan usually use redirect mode
                # v2ray supports both; it depends on the v2ray configuration
                # ipt2socks defaults to pure tproxy mode, but can switch to redirect mode
                # ss-libev 3.3.5+ supports pure tproxy mode via "-T" / "tcp_tproxy": true
                # trojan does not natively support udp transparent proxying, but ipt2socks can provide it
                # trojan-go uses pure tproxy mode only, for both tcp and udp
                #
                # check other proxy software yourself; a wrong setting breaks transparent proxying

## tcponly
tcponly='false' # true: proxy TCP traffic only | false: proxy both TCP and UDP
                # depends on your proxy suite; some proxies/providers do not support proxying UDP
                # DNS queries use UDP by default; if your proxy lacks UDP, set this to true

## selfonly
selfonly='false' # true: only proxy traffic originating from the ss-tproxy host itself
                 # false: proxy traffic from this host and from LAN hosts (gateway and dns pointed at the ss-tproxy host)
                 # since dns_remote must go through the proxy and dns resolution happens on this host, this host must be proxied
                 # we could proxy only dns traffic and nothing else, but it hardly seems worth it; simpler is better

## proxy
#
# settings for the local proxy process: transparent-proxy ports, start/stop commands (leave empty to manage the process yourself)
#
# ss-tproxy expects the proxy process to do no ip-based routing and no dns resolution; it should simply proxy the tcp/udp traffic handed over by iptables
# the local proxy process only ever sees "pure ip" tcp/udp traffic; dns is handled by the dns components before it gets there
# hence the design principle: each component sticks to its own job and need not know the overall transparent-proxy picture
#
# to switch proxies/nodes, operate on the proxy process directly instead of editing ss-tproxy.conf and restarting ss-tproxy
# a restart is a heavyweight operation, only needed when iptables-related settings such as tproxy mode or tcponly change
# in other words, ss-tproxy is mainly for iptables management plus the bundled dns setup, and incidentally starts/stops the proxy process
#
proxy_procgroup='proxy'  # group (fsgid) of the local proxy processes; all proxy processes must run under it, used to exempt their traffic
                         # root/0 is not allowed; the script creates the group automatically (if given a name); names are recommended
                         #
proxy_tcpport='60080'    # TCP listen port of the local ss/ssr/v2ray/ipt2socks process; the port must support transparent proxying
proxy_udpport='60080'    # UDP listen port of the local ss/ssr/v2ray/ipt2socks process; the port must support transparent proxying
                         # the proxy process only needs to listen on "127.0.0.1" (v4) + "::1" (v6), not on the wildcard address
                         #
proxy_startcmd=''        # shell command line to start the local proxy process(es); it should not run for long
proxy_stopcmd=''         # shell command line to stop the local proxy process(es); it should not run for long
                         # leave startcmd/stopcmd empty if you want to manage the local proxy process yourself
                         #
                         # for long command lines, wrap them in functions and call those functions in startcmd/stopcmd
                         # shell functions may be defined anywhere in ss-tproxy.conf, e.g. at the end of the file
                         #
                         # startcmd may call set_proxy_group to set the group and setgid bit on an executable
                         # e.g. "set_proxy_group ss-redir"; ss-redir processes started afterwards switch group automatically
                         #
                         # if startcmd/stopcmd are left empty, you must start and stop the local proxy process manually
                         # in that case use ss-tproxy's set-proxy-group command to set the group and setgid bit on the executable
                         # e.g. "sudo ss-tproxy set-proxy-group ipt2socks", then start ipt2socks and the other local proxy processes

## dns
dns_custom='false'                    # true: use a custom dns setup (advanced, see below) | false: use the built-in dns setup
                                      # with a custom dns setup, all dns-related options are ignored and the built-in domain-splitting rules are disabled
                                      # you must implement resolution/splitting yourself; with udp proxying off, remember to use tcp if dns should go through the proxy
                                      #
dns_procgroup='proxy_dns'             # group (fsgid) of the dns processes; must differ from proxy_procgroup; all dns processes must run under it
                                      # root/0 is not allowed; the script creates the group automatically (if given a name); prefer a name over a gid
                                      #
dns_mainport='60053'                  # logical entry point for dns (udp listen port); the script redirects ALL dns requests to this udp port
                                      # the listen address must cover "127.0.0.1" (v4) + "::1" (v6) to receive local dns requests
                                      # to proxy the LAN as well, it must also cover the relevant interfaces; simplest is to listen on the wildcard (all-zero) address
                                      #
                                      # the dns_* options below only apply when the built-in dns setup is used
                                      #
                                      # the direct and remote DNS options configure the upstream dns servers of the built-in dns components
                                      # the white/black options control the ipset whitelist/blacklist, i.e. direct vs proxied
                                      # they live here so dns settings can be changed without editing ignlist.ext/gfwlist.ext
                                      #
                                      # white/black accept 3 kinds of values; taking dns_direct_white as the example (others alike):
                                      # - 'true'    # add dns_direct's ip to the whitelist so it goes direct
                                      # - 'false'   # do not add dns_direct's ip to the whitelist, e.g. for LAN ips
                                      # - '1.2.3.4' # add 1.2.3.4 to the whitelist; several ips may be given, space separated
                                      # > note: for explicit ips, options ending in 6 take ipv6 addresses, the others ipv4
                                      #
dns_direct='223.5.5.5#53'             # direct DNS (v4); port required; a local/LAN server may be used
dns_direct6='240C::6666#53'           # direct DNS (v6); port required; a local/LAN server may be used
dns_direct_white='true'               # add dns_direct's ip to the whitelist (global/chnroute) so it goes direct
dns_direct6_white='true'              # add dns_direct6's ip to the whitelist (global/chnroute) so it goes direct
                                      #
dns_remote='8.8.8.8#53'               # remote DNS (v4); port required; a local/LAN server may be used
dns_remote6='2001:4860:4860::8888#53' # remote DNS (v6); port required; a local/LAN server may be used
dns_remote_black='true'               # add dns_remote's ip to the blacklist (gfwlist/chnroute) so it goes through the proxy
dns_remote6_black='true'              # add dns_remote6's ip to the blacklist (gfwlist/chnroute) so it goes through the proxy

## dnsmasq
# with a custom dns setup the dnsmasq options are ignored and dnsmasq is not started
# dnsmasq may listen on another port here so other processes can sit in front of it and handle dns first
dnsmasq_bind_port=''                    # dnsmasq listen port; empty means same as dns_mainport
dnsmasq_cache_size='4096'               # max number of cached entries; 0 disables the cache; too large hurts performance
dnsmasq_cache_time_min='3600'           # minimum cache time in seconds, capped at 3600; 0 disables this feature
dnsmasq_query_maxcnt='1024'             # max concurrent dns queries (dns-forward-max), default 150
dnsmasq_log_enable='false'              # verbose logging; not recommended unless debugging
dnsmasq_log_file='/var/log/dnsmasq.log' # log file; use /dev/null if you don't want to keep logs
dnsmasq_conf_dir=()                     # arguments for `--conf-dir`; several allowed, space separated
dnsmasq_conf_file=()                    # arguments for `--conf-file`; several allowed, space separated
dnsmasq_conf_string=()                  # custom config; each array element is one line of configuration

## chinadns
# with a custom dns setup the chinadns options are ignored and chinadns-ng is not started
chinadns_for_gfwlist='true'              # used for mode=gfwlist; speeds up domain matching
chinadns_bind_port='65353'               # listen port; change it if 65353 is taken
chinadns_chnlist_first='false'           # load chnlist domains first (default is gfwlist first)
chinadns_extra_options=''                # extra command-line options (don't repeat options already set)
chinadns_verbose='false'                 # verbose logging; not recommended unless debugging
chinadns_logfile='/var/log/chinadns.log' # log file; use /dev/null if you don't want to keep logs

## dns2tcp
# with a custom dns setup the dns2tcp options are ignored and dns2tcp is not started
# if udp proxying performs poorly, enable dns2tcp; in my tests tcp queries beat udp, perhaps due to isp/gfw QoS on udp
# the disable option exists mainly so dns2tcp can be replaced by other dns tools, e.g. dnsproxy with doh, which is also tcp
dns2tcp_enable='auto'                   # auto: enabled when tcponly | true: always enabled | false: disabled
dns2tcp_bind_port='65454'               # listen port; change it if 65454 is taken
dns2tcp_extra_options=''                # extra command-line options (don't repeat options already set)
dns2tcp_verbose='false'                 # verbose logging; not recommended unless debugging
dns2tcp_logfile='/var/log/dns2tcp.log'  # log file; use /dev/null if you don't want to keep logs

## ipts
ipts_if_lo='lo'                     # name of the loopback interface, usually lo on standard distros; change if not
ipts_rt_tab='233'                   # iproute2 routing table name or ID; don't change unless it conflicts
ipts_rt_mark='0x2333'               # firewall mark for iproute2 policy routing; don't change unless it conflicts
ipts_set_snat='false'               # add ipv4 MASQUERADE(SNAT) rules; effective when selfonly=false, see README
ipts_set_snat6='false'              # add ipv6 MASQUERADE(SNAT) rules; effective when selfonly=false, see README
ipts_reddns_onstop='223.5.5.5#53'   # after stop, redirect LAN hosts' dns to this dns; effective when selfonly=false, see README
ipts_reddns6_onstop='240C::6666#53' # after stop, redirect LAN hosts' dns to this dns; effective when selfonly=false, see README
ipts_proxy_dst_port=''              # which ports to proxy; empty means all; comma separated, colon for inclusive ranges, see README
ipts_drop_quic='tcponly'            # drop QUIC to "blacklisted" hosts: empty: never | tcponly: when tcponly | always: always

## opts
opts_ss_netstat='auto'      # which port-checking tool to use: auto (prefer ss) | ss | netstat

## url
# for updating gfwlist.txt; format: `domain suffix` or `server=/domain suffix/dns_ip` (dnsmasq format; only the `domain suffix` field is used)
url_gfwlist='https://raw.githubusercontent.com/pexcn/daily/gh-pages/gfwlist/gfwlist.txt'
# for updating chnlist.txt; same format as above
url_chnlist='https://raw.githubusercontent.com/felixonmars/dnsmasq-china-list/master/accelerated-domains.china.conf'
# for updating chnroute*.txt; only the APNIC format is supported; to use another ip database, override the relevant ss-tproxy functions in this file
url_chnroute='https://ftp.apnic.net/stats/apnic/delegated-apnic-latest'

## back-to-China mode
#
# when visiting mainland sites from abroad, ip region locks and the like may block normal use of mainland services
# "back-to-China" mode proxies you back into China to get around those restrictions; the principle is the same as circumvention
#
# ss-tproxy supports this mode; to switch to it, perform the following steps:
#
# - use gfwlist splitting mode, i.e. mode='gfwlist'
# - swap the contents of the dns_direct* and dns_remote* options
# - change url_gfwlist to a mainland domain-list url, e.g. url_chnlist
# - comment out the Telegram address ranges in gfwlist.ext (those are for mainland users)
# - run ss-tproxy update-gfwlist (replaces gfwlist.txt with mainland domains)
#
# these steps only need to be done once; afterwards use ss-tproxy start/stop as usual

###################### hook functions ######################

# runs before the start logic
pre_start() {
    # do something
    return
}

# runs after the start logic
post_start() {
    # do something
    return
}

# runs before the stop logic
pre_stop() {
    # do something
    return
}

# runs after the stop logic
post_stop() {
    # do something
    return
}

# extra status checks, e.g. curl to test whether the proxy works
extra_status() {
    # do something
    return
}

# runs as the last step of start; collects runtime state (e.g. process pids) and saves it to a file
extra_pid() {
    # format is shell variable assignment; name variables carefully to avoid clashes/overwrites
    # the pid file is a shell script; the next run sources it
    # echo "pid_foo=$pid_foo"
    # echo "pid_bar=$pid_bar"
    return
}

###################### custom dns setup ######################

# with a custom dns setup you must implement "domain splitting" (ignlist/chnlist/gfwlist) yourself
# and add the ips resolved from those domains to the ipset black/white lists so they interact with the iptables rules
# blacklist: sstp_black, sstp_black6 | whitelist: sstp_white, sstp_white6

# not all of the interfaces below must be implemented; organize the code however you like, as long as the logic is correct
# if unsure how, read the dns-related source in the ss-tproxy script and follow its pattern

# initialization, called when the script loads
custom_dns_init() {
    # do something
    return
}

# ips to add to the whitelist, called before dns starts
custom_dns_whiteip() {
    # same format as ignlist.ext, one per line
    # echo "-223.5.5.5"
    # echo "~240C::6666"
    return
}

# ips to add to the blacklist, called before dns starts
custom_dns_blackip() {
    # same format as gfwlist.ext, one per line
    # echo "-8.8.8.8"
    # echo "~2001:4860:4860::8888"
    return
}

# start the dns processes; be sure to run them under the dns_procgroup group
custom_dns_start() {
    # do something
    return
}

# stop the dns processes, called on stop
custom_dns_stop() {
    # do something
    return
}

# print the running status, called on status
custom_dns_status() {
    # do something
    return
}

# flush the dns cache, called on flush-dnscache
custom_dns_flush() {
    # do something
    return
}

# runs as the last step of start; collects runtime state, same as extra_pid
custom_dns_pid() {
    # format is shell variable assignment; name variables carefully to avoid clashes/overwrites
    # the pid file is a shell script; the next run sources it
    # echo "pid_foo=$pid_foo"
    # echo "pid_bar=$pid_bar"
    return
}

# besides the hook functions above, you may define other shell functions and variables
# you may also use functions and variables already defined in ss-tproxy
#
# a function defined here with the same name as one in ss-tproxy overrides the original
# with a custom dns setup, this feature helps you integrate with the original script quickly (see its source)
#
# ss-tproxy.conf is a shell script, so you can source other shell scripts from it
# when ss-tproxy.conf is executed, it can access the command-line (positional) arguments passed by ss-tproxy

v1.1.0 speed (stability) issues

I built the mips version and used it with ss_tproxy on a router, and found speed and stability lacking: some pages fail to load and only open after one or more refreshes, and page loads are slow. A controlled comparison with glider shows none of these issues, confirming the problem is in ipt2socks. I'm now running glider (tcp) + ipt2socks (udp) and everything works well.

ipt2socks arguments: ipt2socks -R4 -s 192.168.1.2 -p 1080 -l 1091
glider arguments for reference: glider -listen redir://:1092 -forward socks5://192.168.1.2:1080

Process disappears after running for a while

It's odd: it happened on several runs. Memory usage isn't high, and I'm not familiar with debugging on alpine; I can't find any oom-kill records. With v2ray's dokodemo-door this doesn't happen.

Environment

docker -> alpine

Linux version 5.0.2-aml-s905 (root@vbox) (gcc version 7.3.1 20180425 [linaro-7.3-2018.05 revision d29120a424ecfbc167ef90065c0eeb7f91977701] (Linaro GCC 7.3-2018.05)) #5.77 SMP PREEMPT Mon Apr 1 17:41:33 MSK 2019

Arguments

ipt2socks -s 127.0.0.1 -p 1080

socks5 server

trojan 1.14.0

Last section of the log

2020-01-18 12:42:08 INF: [tcp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:08 INF: [tcp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:08 INF: [tcp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:08 INF: [tcp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:08 INF: [tcp_socks5_resp_read_cb] connected to target host, start forwarding
2020-01-18 12:42:08 INF: [tcp_stream_read_cb] tcp connection has been closed in both directions
2020-01-18 12:42:09 INF: [udp_socket_listen_cb] recv 40 bytes data from 192.168.5.254#23431
2020-01-18 12:42:09 INF: [udp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [udp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [udp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [udp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [udp_socks5_resp_read_cb] udp tunnel is open, try to send packet via socks5
2020-01-18 12:42:09 INF: [udp_socks5_resp_read_cb] send 40 bytes data to 8.8.8.8#53 via socks5
2020-01-18 12:42:09 INF: [udp_client_recv_cb] recv 56 bytes data from 8.8.8.8#53 via socks5
2020-01-18 12:42:09 INF: [udp_client_recv_cb] send 56 bytes data to 192.168.5.254#23431 via tproxy
2020-01-18 12:42:09 INF: [tcp_socket_listen_cb] accept new tcp connection: 192.168.5.191#39917
2020-01-18 12:42:09 INF: [tcp_socket_listen_cb] original destination addr: 216.58.199.106#443
2020-01-18 12:42:09 INF: [tcp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [tcp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [tcp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [tcp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:09 INF: [tcp_socks5_resp_read_cb] connected to target host, start forwarding
2020-01-18 12:42:22 INF: [tcp_stream_read_cb] tcp connection has been closed in both directions
2020-01-18 12:42:23 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:25 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:25 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:25 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:25 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:27 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:30 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:30 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:31 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:31 INF: [udp_svrentry_timer_cb] udp server idle timeout, release related resources
2020-01-18 12:42:32 INF: [udp_svrentry_timer_cb] udp server idle timeout, release related resources
2020-01-18 12:42:32 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:36 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:37 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:37 INF: [udp_svrentry_timer_cb] udp server idle timeout, release related resources
2020-01-18 12:42:39 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] accept new tcp connection: 192.168.5.191#41563
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] original destination addr: 192.229.237.96#443
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_resp_read_cb] connected to target host, start forwarding
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] accept new tcp connection: 192.168.5.191#41565
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] original destination addr: 192.229.237.96#443
2020-01-18 12:42:39 INF: [tcp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:39 INF: [tcp_socks5_resp_read_cb] connected to target host, start forwarding
2020-01-18 12:42:41 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:41 INF: [udp_svrentry_timer_cb] udp server idle timeout, release related resources
2020-01-18 12:42:46 INF: [udp_cltentry_timer_cb] udp client idle timeout, release related resources
2020-01-18 12:42:53 INF: [udp_socket_listen_cb] recv 22 bytes data from 192.168.5.254#61082
2020-01-18 12:42:53 INF: [udp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:53 INF: [udp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:53 INF: [udp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:53 INF: [udp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#1080
2020-01-18 12:42:53 INF: [udp_socks5_resp_read_cb] udp tunnel is open, try to send packet via socks5
2020-01-18 12:42:53 INF: [udp_socks5_resp_read_cb] send 22 bytes data to 8.8.8.8#53 via socks5
2020-01-18 12:42:54 INF: [udp_client_recv_cb] recv 86 bytes data from 8.8.8.8#53 via socks5
bash-5.0#

The '-b' option has no effect

Environment: a router running Tomato:

Linux netgear 2.6.36.4brcmarm #7 SMP PREEMPT Wed Aug 11 16:39:01 CEST 2021 armv7l GNU/Linux

Version:

ipt2socks v1.1.3

Command line:

ipt2socks -s 192.168.2.20 -p 7575 -l 8095 -4 -R, or: ipt2socks -s 192.168.2.20 -p 7575 -b 192.168.2.1 -l 8095 -4 -R

Whether or not -b is used to specify the listen address, ipt2socks always listens on 0.0.0.0:

netstat -lnp | grep ipt2
tcp        0      0 0.0.0.0:8095            0.0.0.0:*               LISTEN      10924/ipt2socks
udp        0      0 0.0.0.0:8095            0.0.0.0:*                           10924/ipt2socks

How to build ipt2socks for OpenWrt

I built ipt2socks on Ubuntu following the README's statically-linked libuv instructions.
It runs fine on Ubuntu.
But on openwrt 19.07.1 x86_64 it fails with this error:

root@OpenWrt:/usr/sbin# ./ipt2socks
/bin/ash: ./ipt2socks: not found

How can I build a version for openwrt 19.07.1 x86_64 on Ubuntu?
Thanks.
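A likely cause (an assumption, since the binary itself isn't shown): a binary built on stock Ubuntu is dynamically linked against glibc, and musl-based OpenWrt has no glibc loader, which is exactly what ash's "not found" error looks like. Two hedged options, the first of which the releases page already provides:

```shell
# option 1: use the prebuilt musl static binary from the releases page

# option 2: cross-compile a static binary with a musl toolchain
# (the toolchain name x86_64-linux-musl-gcc is an assumption; adjust
#  it to however your musl cross compiler is named, and check whether
#  the project Makefile expects -static in CFLAGS or LDFLAGS)
git clone https://github.com/zfl9/ipt2socks
cd ipt2socks
make CC=x86_64-linux-musl-gcc CFLAGS="-O2 -static"

# sanity check before deploying: look for "statically linked"
file ipt2socks
```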

Is it "really" a transparent proxy?

The redsocks documentation states that it is not a true transparent proxy: "redsocks acts at TCP level, so three-way handshake is completed and redsocks accepts connection before connection through proxy (and to proxy) is established". Is this project the same?

Can't forward UDP to a sub-net with tproxy

I want to forward UDP from a sub-net to ipt2socks via its tproxy port.

I added these rules:

ip rule add fwmark 1088 table 100
ip route add local default dev eth2 table 100

iptables -t mangle -A OUTPUT -o eth2 -p udp -j MARK --set-mark 1088
iptables -t mangle -A PREROUTING -i eth2 -p udp -j TPROXY --on-ip 10.0.0.1 --on-port 10000 --tproxy-mark 1088

sysctl -w net.ipv4.conf.eth2.forwarding=1

("eth2" has 10.0.0.1/24 as its IP and the peer has 10.0.0.2/24)

This is the ipt2socks log:

root@localhost:/home/user# ipt2socks -v --server-addr 127.0.0.1 --server-port 9000 --listen-addr4 10.0.0.1 --listen-port 10000 --udp-only
2021-10-08 22:07:11 INF: [main] server address: 127.0.0.1#9000
2021-10-08 22:07:11 INF: [main] listen address: 10.0.0.1#10000
2021-10-08 22:07:11 INF: [main] listen address: ::1#10000
2021-10-08 22:07:11 INF: [main] udp cache maximum size: 256
2021-10-08 22:07:11 INF: [main] udp socket idle timeout: 60
2021-10-08 22:07:11 INF: [main] number of worker threads: 1
2021-10-08 22:07:11 INF: [main] enable udp transparent proxy
2021-10-08 22:07:11 INF: [main] verbose mode (affect performance)
2021-10-08 22:07:18 INF: [udp_tproxy_recvmsg_cb] recv from 10.0.0.2#42208, nrecv:45
2021-10-08 22:07:18 INF: [udp_tproxy_recvmsg_cb] try to connect to 127.0.0.1#9000 ...
2021-10-08 22:07:18 INF: [udp_socks5_connect_cb] connect to 127.0.0.1#9000 succeeded
2021-10-08 22:07:18 INF: [udp_socks5_send_authreq_cb] send to 127.0.0.1#9000, nsend:3
2021-10-08 22:07:18 INF: [udp_socks5_recv_authresp_cb] recv from 127.0.0.1#9000, nrecv:2
2021-10-08 22:07:18 INF: [udp_socks5_recv_authresp_cb] send to 127.0.0.1#9000, nsend:10
2021-10-08 22:07:18 INF: [udp_socks5_recv_proxyresp_cb] recv from 127.0.0.1#9000, nrecv:10
2021-10-08 22:07:18 INF: [udp_socks5_recv_proxyresp_cb] send to 1.1.1.1#53, nsend:55
2021-10-08 22:07:18 INF: [udp_socks5_recv_udpmessage_cb] recv from 1.1.1.1#53, nrecv:59
2021-10-08 22:07:18 INF: [udp_socks5_recv_udpmessage_cb] send to 10.0.0.2#42208, nsend:49
2021-10-08 22:07:23 INF: [udp_tproxy_recvmsg_cb] recv from 10.0.0.2#42208, nrecv:45
2021-10-08 22:07:23 INF: [udp_tproxy_recvmsg_cb] send to 1.1.1.1#53, nsend:55
2021-10-08 22:07:23 INF: [udp_socks5_recv_udpmessage_cb] recv from 1.1.1.1#53, nrecv:59
2021-10-08 22:07:23 INF: [udp_socks5_recv_udpmessage_cb] send to 10.0.0.2#42208, nsend:49
2021-10-08 22:07:28 INF: [udp_tproxy_recvmsg_cb] recv from 10.0.0.2#42208, nrecv:45
2021-10-08 22:07:28 INF: [udp_tproxy_recvmsg_cb] send to 1.1.1.1#53, nsend:55
2021-10-08 22:07:28 INF: [udp_socks5_recv_udpmessage_cb] recv from 1.1.1.1#53, nrecv:59
2021-10-08 22:07:28 INF: [udp_socks5_recv_udpmessage_cb] send to 10.0.0.2#42208, nsend:49

This is the socks5 server log:

user@localhost:~$ socks -l -p9000
211009020629.225 9000 00000 - 0.0.0.0:9000 0.0.0.0:0 0 0 0 Accepting connections [23981/4196337472]
211009020706.622 9000 00000 - 127.0.0.1:39238 1.1.1.1:53 135 147 0 UDPMAP 0.0.0.0:0

Segmentation fault

OS: CentOS 7.6
TPROXY rule: -p udp -j TPROXY --on-port 12345 --on-ip 0.0.0.0 --tproxy-mark 0x1/0x1
socks5 proxy: gost -L socks5://127.0.0.1:10801
Command: sudo -u nobody /usr/bin/ipt2socks -U -4 -s 127.0.0.1 -p 10801 -b 0.0.0.0 -l 12345 -o 5 -G -v
Error output:
......
2019-11-28 17:30:57 INF: [udp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#10801
Segmentation fault

Why does ipt2socks need to listen on UDP port 443?

Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] server address: 127.0.0.1#1080
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] listen address: 127.0.0.1#60080
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] listen address: ::1#60080
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] udp cache maximum size: 256
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] udp socket idle timeout: 60
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] number of worker threads: 1
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] enable tcp transparent proxy
Jan 31 17:25:16 archwork ipt2socks[464762]: 2024-01-31 17:25:16 INF: [main] enable udp transparent proxy
Jan 31 17:25:31 archwork ipt2socks[464762]: 2024-01-31 17:25:31 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:31 archwork ipt2socks[464762]: 2024-01-31 17:25:31 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:31 archwork ipt2socks[464762]: 2024-01-31 17:25:31 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:31 archwork ipt2socks[464762]: 2024-01-31 17:25:31 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:31 archwork ipt2socks[464762]: 2024-01-31 17:25:31 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:32 archwork ipt2socks[464762]: 2024-01-31 17:25:32 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:25:33 archwork ipt2socks[464762]: 2024-01-31 17:25:33 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:21 archwork ipt2socks[464762]: 2024-01-31 17:26:21 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:21 archwork ipt2socks[464762]: 2024-01-31 17:26:21 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:21 archwork ipt2socks[464762]: 2024-01-31 17:26:21 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:22 archwork ipt2socks[464762]: 2024-01-31 17:26:22 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:22 archwork ipt2socks[464762]: 2024-01-31 17:26:22 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use
Jan 31 17:26:22 archwork ipt2socks[464762]: 2024-01-31 17:26:22 ERR: [udp_socks5_recv_udpmessage_cb] bind tproxy reply address: Address already in use

Constant "failed to read data from socks5 server" errors; the network is unusable

[udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file
2019-11-12 08:02:20 ERR: [udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file
2019-11-12 08:02:21 ERR: [udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file
2019-11-12 08:02:23 ERR: [udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file
2019-11-12 08:02:26 ERR: [udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file
2019-11-12 08:02:30 ERR: [udp_socks5_auth_read_cb] failed to read data from socks5 server: (4095) end of file


Setup:
v2ray is already running on the gateway, and connecting to it directly works. I then wanted to use iptables to TPROXY all traffic on the subnet into v2ray's dokodemo-door inbound. Since v2ray has UDP problems, the idea was to convert the traffic to socks first and then feed it into the port that dokodemo-door listens on. With the configuration below I get the errors above and no connectivity. Relevant configuration:
iptables:

Proxy LAN devices

iptables -t mangle -N V2RAY
iptables -t mangle -A V2RAY -d 127.0.0.1/32 -j RETURN
iptables -t mangle -A V2RAY -d 224.0.0.0/4 -j RETURN
iptables -t mangle -A V2RAY -d 255.255.255.255/32 -j RETURN
iptables -t mangle -A V2RAY -d 10.1.0.0/16 -p tcp -j RETURN
iptables -t mangle -A V2RAY -p udp -j TPROXY --on-port 60080 --tproxy-mark 1
iptables -t mangle -A V2RAY -p tcp -j TPROXY --on-port 60080 --tproxy-mark 1
iptables -t mangle -A PREROUTING -j V2RAY

Proxy the gateway itself

iptables -t mangle -N V2RAY_MASK
iptables -t mangle -A V2RAY_MASK -d 224.0.0.0/4 -j RETURN
iptables -t mangle -A V2RAY_MASK -d 255.255.255.255/32 -j RETURN
iptables -t mangle -A V2RAY_MASK -d 10.1.0.0/16 -p tcp -j RETURN
iptables -t mangle -A V2RAY_MASK -j RETURN -m mark --mark 0xff
iptables -t mangle -A V2RAY_MASK -p udp -j MARK --set-mark 1
iptables -t mangle -A V2RAY_MASK -p tcp -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -j V2RAY_MASK


ipt2socks arguments:
ipt2socks -s 127.0.0.1 -p 12345


v2ray config:
{
    "log": {
        "loglevel": "warning"
    },
    "inbounds": [
        {
            "tag": "transparent",
            "port": 12345,
            "protocol": "dokodemo-door",
            "settings": {
                "network": "tcp,udp",
                "followRedirect": true
            },
            "sniffing": {
                "enabled": true,
                "destOverride": ["http", "tls"]
            },
            "streamSettings": {
                "sockopt": {
                    "tproxy": "tproxy" // transparent proxy uses TPROXY
                }
            }
        },
        {
            "port": 1080,
            "listen": "0.0.0.0",
            "protocol": "socks",
            "sniffing": {
                "enabled": true,
                "destOverride": ["http", "tls"]
            },
            "settings": {
                "auth": "noauth"
            }
        }
    ],
    "outbounds": [
        {
            "protocol": "vmess",
            "settings": {
                "vnext": [{
                    ***********************
                }]
            },
            "streamSettings": {
                "network": "ws",
                "security": "tls",
                "wsSettings": {
                    "path": "/v2ray"
                },
                "sockopt": {
                    "mark": 255
                }
            }
        },
        {
            "protocol": "freedom",
            "tag": "direct",
            "settings": {},
            "streamSettings": {
                "sockopt": {
                    "mark": 255
                }
            }
        },
        {
            "protocol": "blackhole",
            "settings": {},
            "tag": "adblock"
        }
    ],
    "routing": {
        "domainStrategy": "IPOnDemand",
        "rules": [
            {
                "type": "field",
                "domain": [],
                "outboundTag": "direct"
            },
            {
                "type": "field",
                "ip": ["geoip:cn", "geoip:private"],
                "outboundTag": "direct"
            },
            {
                "type": "field",
                "domain": ["geosite.dat:cn"],
                "outboundTag": "direct"
            },
            {
                // Blocks major ads.
                "type": "field",
                "domain": ["geosite:category-ads"],
                "outboundTag": "blocked"
            }
        ]
    }
}

v2ray log:
2019/11/12 08:42:42 [Warning] v2ray.com/core/transport/internet/tcp: failed to accepted raw connections > accept tcp [::]:12345: accept4: too many open files
2019/11/12 08:42:43 [Warning] v2ray.com/core/transport/internet/tcp: failed to accepted raw connections > accept tcp [::]:12345: accept4: too many open files
2019/11/12 08:42:43 [Warning] v2ray.com/core/transport/internet/tcp: failed to accepted raw connections > accept tcp [::]:12345: accept4: too many open files
2019/11/12 08:42:44 [Warning] v2ray.com/core/transport/internet/tcp: failed to accepted raw connections > accept tcp [::]:12345: accept4: too many open files
2019/11/12 08:42:44 [Warning] v2ray.com/core/transport/internet/tcp: failed to accepted raw

v2ray keeps printing this warning, and the whole network is down. My understanding of ipt2socks is that packets enter through its listen port 60080, are converted into TCP socks traffic, and are then sent to port 12345 given on the command line. Is that wrong? Port 12345 is v2ray's dokodemo-door port; why would it report "failed to accept"?

UDP seems to be forwardable only to the local machine

An ea6200 runs ipt2socks and forwards to a Debian box on the LAN running v2ray.

ea6200
192.168.1.1
Linux unknown 2.6.36.4brcmarm #17 PREEMPT Thu May 7 22:42:27 CEST 2020 armv7l Tomato

debian
192.168.1.149
Linux Debian 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07) x86_64 GNU/Linux

UDP cannot be forwarded to 192.168.1.149; TCP works.
I tested redsocks and redsocks2 as well, with the same result.

Command:
/jffs/ipt2socks -b 192.168.1.1 -l 12345 -s 192.168.1.149 -p 1081 -R -4&

After running a NAT type test tool on a Windows 10 PC on the LAN:

Error output:
2020-07-15 13:44:11 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:11 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:12 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:14 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:16 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:18 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:20 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:22 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:24 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:26 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:28 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused
2020-07-15 13:44:30 ERR: [udp_socks5_recv_udpmessage_cb] recv from socks5 server: Connection refused

Output of netstat -nap | grep ipt2socks:
udp 0 0 0.0.0.0:12345 0.0.0.0:* 16749/ipt2socks
udp 0 0 127.0.0.1:24157 127.0.0.1:1081 ESTABLISHED 16749/ipt2socks
udp 0 0 127.0.0.1:6012 127.0.0.1:1081 ESTABLISHED 16749/ipt2socks
udp 0 0 127.0.0.1:61083 127.0.0.1:1081 ESTABLISHED 16749/ipt2socks
udp 0 0 127.0.0.1:9646 127.0.0.1:1081 ESTABLISHED 16749/ipt2socks
udp 0 0 127.0.0.1:39615 127.0.0.1:1081 ESTABLISHED 16749/ipt2socks

iptables rules:
iptables -t mangle -N FUCKGFW
ip route add local 0/0 dev lo table 100
ip rule add fwmark 0x2333/0x2333 lookup 100
iptables -t mangle -A FUCKGFW -d 0.0.0.0/8 -j RETURN
iptables -t mangle -A FUCKGFW -d 10.0.0.0/8 -j RETURN
iptables -t mangle -A FUCKGFW -d 127.0.0.0/8 -j RETURN
iptables -t mangle -A FUCKGFW -d 169.254.0.0/16 -j RETURN
iptables -t mangle -A FUCKGFW -d 172.16.0.0/12 -j RETURN
iptables -t mangle -A FUCKGFW -d 192.168.0.0/16 -j RETURN
iptables -t mangle -A FUCKGFW -d 224.0.0.0/4 -j RETURN
iptables -t mangle -A FUCKGFW -d 240.0.0.0/4 -j RETURN
iptables -t mangle -A FUCKGFW -s 192.168.1.149 -j RETURN
iptables -t mangle -A FUCKGFW -p udp -j TPROXY --tproxy-mark 0x2333/0x2333 --on-port 12345
iptables -t mangle -A PREROUTING -p udp -s 192.168/16 -j FUCKGFW

Works poorly with other software; it keeps crashing

For example, naiveproxy's socks5 (https://github.com/klzgrad/naiveproxy)

clash and dns2socks manage to work with naiveproxy well enough

Here is the error output:

2021-10-27 14:46:40 INF: [main] server address: 127.0.0.1#1080
2021-10-27 14:46:40 INF: [main] listen address: 0.0.0.0#12345
2021-10-27 14:46:40 INF: [main] listen address: ::#12345
2021-10-27 14:46:40 INF: [main] udp cache maximum size: 256
2021-10-27 14:46:40 INF: [main] udp socket idle timeout: 60
2021-10-27 14:46:40 INF: [main] number of worker threads: 1
2021-10-27 14:46:40 INF: [main] enable tcp transparent proxy
2021-10-27 14:46:40 INF: [main] enable udp transparent proxy
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:49:30 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:50:56 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
2021-10-27 14:50:56 ERR: [tcp_stream_payload_forward_cb] recv from client stream: Connection reset by peer
./ipt2socks: libev/ev_poll.c: 113: poll_poll: Assertion `("libev: poll returned illegal result, broken BSD kernel?", p < ((loop)->polls) + ((loop)->pollcnt))' failed.
Aborted

It had been working fine; after today's update it keeps reporting that the port is in use

[run_event_loop] failed to listen address for tcp6 socket: (98) address already in use

The master branch is broken; I had to check out v1.0.2 and build that instead

The problem turns out to appear when the -R option is added

ipt2socks -s 127.0.0.1 -p 1080 -R

root@xxxxx:/# ipt2socks -s 127.0.0.1 -p 1080 -R
2020-03-23 11:24:57 INF: [main] server address: 127.0.0.1#1080
2020-03-23 11:24:57 INF: [main] listen address: 0.0.0.0#60080
2020-03-23 11:24:57 INF: [main] listen address: ::#60080
2020-03-23 11:24:57 INF: [main] number of worker threads: 1
2020-03-23 11:24:57 INF: [main] udp socket idle timeout: 300
2020-03-23 11:24:57 INF: [main] udp cache maximum size: 256
2020-03-23 11:24:57 INF: [main] tcp socket buffer size: 8192
2020-03-23 11:24:57 INF: [main] enable tcp transparent proxy
2020-03-23 11:24:57 INF: [main] enable udp transparent proxy
2020-03-23 11:24:57 INF: [main] use redirect instead of tproxy
2020-03-23 11:24:57 ERR: [run_event_loop] failed to listen address for tcp6 socket: (98) address already in use

[Question] How can I forward traffic across network namespaces with ipt2socks?

I'm trying to forward traffic across namespaces, basically I set up a transparent proxy inside a network namespace and forward the traffic to another one.

I create namespaces and set up all the rest with:

ip netns add nsx
ip netns add nsy
ip link add vethx type veth peer name peerx netns nsx
ip link set vethx up
ip address add 10.0.0.1/24 dev vethx
ip netns exec nsx ip link set peerx up
ip netns exec nsx ip address add 10.0.0.2/24 dev peerx
ip netns exec nsx ip link add vethy type veth peer name peery netns nsy
ip netns exec nsx ip link set vethy up
ip netns exec nsx ip address add 10.0.1.1/24 dev vethy
ip netns exec nsx sysctl -w net.ipv4.conf.peerx.forwarding=1
ip netns exec nsx sysctl -w net.ipv4.conf.vethy.forwarding=1
ip netns exec nsx sysctl -w net.ipv4.ip_forward=1
ip netns exec nsy ip link set peery up
ip netns exec nsy ip address add 10.0.1.2/24 dev peery
ip netns exec nsy ip route add default via 10.0.1.1 dev peery

Rules are added in the network namespace "nsx":

ip netns exec nsx ip rule add fwmark 1088 table 100
ip netns exec nsx ip route add local default dev vethy table 100

Iptables rule is added:

ip netns exec nsx iptables -t mangle -A PREROUTING -i vethy -p tcp -j TPROXY -s 10.0.1.2 --on-ip 10.0.0.1 --on-port 19040 --tproxy-mark 1088

But when I try to connect I get this:

root@localhost:/home/user# dig @1.1.1.1 duckduckgo.com
; <<>> DiG 9.18.1-1-Debian <<>> @1.1.1.1 duckduckgo.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

============================

So, what can be done to make the connection succeed?

failed to bind address for udp4 socket: (98) address already in use

Environment

This is mainly to proxy the router's own chinadns upstream DNS requests.
Router IP: 172.16.10.40
Requests from Chinadns-ng to 8.8.8.8 are forwarded to ipt2socks via TPROXY.
The log shows that chinadns sends the UDP request and TPROXY successfully redirects it to ipt2socks.
ipt2socks receives the request, but then fails with the error in the title.

Log

2020-03-02 17:19:21 INF: [main] server address: 127.0.0.1#8087
2020-03-02 17:19:21 INF: [main] listen address: 0.0.0.0#1051
2020-03-02 17:19:21 INF: [main] number of worker threads: 1
2020-03-02 17:19:21 INF: [main] udp socket idle timeout: 300
2020-03-02 17:19:21 INF: [main] udp cache maximum size: 256
2020-03-02 17:19:21 INF: [main] tcp socket buffer size: 8192
2020-03-02 17:19:21 INF: [main] enable udp transparent proxy
2020-03-02 17:19:21 INF: [main] verbose mode (affect performance)
2020-03-02 17:19:36 INF: [udp_socket_listen_cb] recv 37 bytes data from 172.16.10.40#42285
2020-03-02 17:19:36 INF: [udp_socket_listen_cb] try to connect to socks5 server: 127.0.0.1#8087
2020-03-02 17:19:36 INF: [udp_socks5_tcp_connect_cb] connected to socks5 server: 127.0.0.1#8087
2020-03-02 17:19:36 INF: [udp_socks5_tcp_connect_cb] send authreq to socks5 server: 127.0.0.1#8087
2020-03-02 17:19:36 INF: [udp_socks5_auth_read_cb] send proxyreq to socks5 server: 127.0.0.1#8087
2020-03-02 17:19:36 INF: [udp_socks5_resp_read_cb] udp tunnel is open, try to send packet via socks5
2020-03-02 17:19:36 INF: [udp_socks5_resp_read_cb] send 37 bytes data to 8.8.8.8#53 via socks5
2020-03-02 17:19:37 INF: [udp_client_recv_cb] recv 112 bytes data from 8.8.8.8#53 via socks5
2020-03-02 17:19:37 ERR: [udp_client_recv_cb] failed to bind address for udp4 socket: (98) address already in use

iptables

ip rule add fwmark 1 table 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -A PREROUTING -p udp -d 8.8.8.8 --dport 53 -j TPROXY --on-port 1051 --tproxy-mark 0x01/0x01
iptables -t mangle -A OUTPUT -p udp -d 8.8.8.8 --dport 53 -j MARK --set-mark 1

TCP connects normally, but UDP gets no response.

Computers on the LAN can make outbound TCP connections normally, but UDP packets sent out get no response; nslookup fails.

The platform is OpenWrt x86-64.
The socks5 server is NaiveProxy.
The ipt2socks and iptables settings are below.
Both look quite simple. Or does the problem lie in OpenWrt's TPROXY?

config ipt2socks
option enable '1'
option server_addr '127.0.0.1'
option server_port '1080'
option auth_username ''
option auth_password ''
option listen_addr4 '0.0.0.0'
option listen_addr6 '::1'
option listen_port '1234'
option thread_nums '1'
option nofile_limit '65535'
option udp_timeout '300'
option cache_size '256'
option buffer_size '8192'
option graceful '0'
option redirect '0'
option tcp_only '0'
option udp_only '0'
option ipv4_only '0'
option ipv6_only '0'

Set up policy routing
ip rule add fwmark 1 table 100
ip route add local 0.0.0.0/0 dev lo table 100

Proxy LAN devices
iptables -t mangle -N NAIVE
iptables -t mangle -A NAIVE -d 127.0.0.1/32 -j RETURN
iptables -t mangle -A NAIVE -d 224.0.0.0/4 -j RETURN
iptables -t mangle -A NAIVE -d 255.255.255.255/32 -j RETURN
iptables -t mangle -A NAIVE -d 192.168.0.0/16 -j RETURN
iptables -t mangle -A NAIVE -p udp -j TPROXY --on-port 1234 --tproxy-mark 1
iptables -t mangle -A NAIVE -p tcp -j TPROXY --on-port 1234 --tproxy-mark 1
iptables -t mangle -A PREROUTING -j NAIVE

Support a configuration file

The pure command line works, but with a configuration file you could edit the config directly instead of the service file, which is more intuitive and convenient.

Compile warnings under openwrt

# ......
In file included from netutils.c:3:0:
netutils.c: In function 'set_nofile_limit':
logutils.h:21:16: warning: format '%lu' expects argument of type 'long unsigned int', but argument 8 has type 'rlim_t {aka long long unsigned int}' [-Wformat=]
         printf("\e[1;35m%04d-%02d-%02d %02d:%02d:%02d ERR:\e[0m " fmt "\n",  \
                ^
logutils.h:21:16: note: in definition of macro 'LOGERR'
         printf("\e[1;35m%04d-%02d-%02d %02d:%02d:%02d ERR:\e[0m " fmt "\n",  \
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
netutils.c:290:56: note: format string is defined here
         LOGERR("[set_nofile_limit] setrlimit(nofile, %lu): (%d) %s", nofile, errno, errstring(errno));
                                                      ~~^
                                                      %llu
netutils.c: In function 'run_as_user':
netutils.c:310:9: warning: implicit declaration of function 'initgroups'; did you mean 'getgroups'? [-Wimplicit-function-declaration]
     if (initgroups(userinfo->pw_name, userinfo->pw_gid) < 0) {
         ^~~~~~~~~~
         getgroups
# ......

Is #1 related to this warning?

Feature request: fake DNS

Resolving the real IP is a key problem that transparent proxying has to solve, and supporting it surely doesn't violate the KISS principle. The work can be done entirely locally: embed a tiny DNS server that returns a unique fake IP for each domain and maintains the fake-IP-to-domain mapping. By contrast, resolving through the proxy requires a separate channel such as ss-tunnel, and has higher latency.

Among common circumvention tools, clash supports fake DNS. v2ray supports TLS domain sniffing, but it often fails to capture the domain and doesn't cover other traffic types such as UDP. Someone also submitted a PR for this feature to v2ray, but it hasn't been accepted yet.
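The mapping described above can be sketched in a few lines. A toy illustration (not part of ipt2socks) that hands out one unique fake IP per domain from 198.18.0.0/16, a range commonly used for this purpose, and keeps the reverse map:

```shell
# Toy sketch of the fake-IP idea; state lives in two shell variables.
NEXT=1
MAP=""

# allocate (or look up) the fake IP for domain $1; result in $FAKE_IP
fake_ip_for() {
    FAKE_IP=$(printf '%s' "$MAP" | awk -v d="$1" '$1 == d { print $2 }')
    if [ -z "$FAKE_IP" ]; then
        FAKE_IP="198.18.$((NEXT / 256)).$((NEXT % 256))"
        NEXT=$((NEXT + 1))
        MAP="${MAP}$1 ${FAKE_IP}
"
    fi
}

# reverse lookup: print the domain that owns fake IP $1
domain_for() {
    printf '%s' "$MAP" | awk -v i="$1" '$2 == i { print $1 }'
}
```

A real implementation would of course answer DNS queries with these fake IPs and translate them back when the connection reaches the proxy.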

How can I forward ipt2socks TPROXY port to a subnet?

With REDIRECT for TCP, it is easy:

iptables -t nat -A PREROUTING -i subnet0 -p tcp --syn -j DNAT --to-destination 127.0.0.1:60080

But how can I do the same redirection with TPROXY? And how can I do it for UDP?
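TPROXY keeps the original destination address rather than rewriting it, so the DNAT approach doesn't carry over; instead, packets are policy-routed to the local stack and the TPROXY target (which exists for both TCP and UDP) steers them to the ipt2socks port. A sketch along the lines of the rules used elsewhere in this thread; the interface name subnet0 and the mark 1088 are placeholders:

```shell
# deliver packets carrying the mark to the local stack
ip rule add fwmark 1088 table 100
ip route add local default dev lo table 100

# TPROXY both TCP and UDP arriving from the subnet to ipt2socks;
# note ipt2socks must listen on a reachable address (e.g. -b 0.0.0.0)
iptables -t mangle -A PREROUTING -i subnet0 -p tcp \
  -j TPROXY --on-port 60080 --tproxy-mark 1088
iptables -t mangle -A PREROUTING -i subnet0 -p udp \
  -j TPROXY --on-port 60080 --tproxy-mark 1088
```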

High CPU usage under few TCP/UDP connections

Version: 5128beb
libuv:1.32.0
Linux Kernel: 4.14

I was trying 5128beb on an OpenWRT router. It still crashes sometimes.

If it runs continuously for 20 minutes, the CPU usage grows to 50%. After manually restarting it, the CPU usage drops back to 4%. Memory usage wasn't significantly different, though.

@zfl9 Do you know what could be the reason?

Build error with the openwrt 19.07.3 SDK

make[3]: Entering directory '/data00/home/j3l11234/openwrt/openwrt/package/custom/openwrt-ipt2socks'
rm -f /data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7/.built
touch /data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7/.built_check
CFLAGS="-Os -pipe -mno-branch-likely -mips32r2 -mtune=24kc -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -msoft-float -iremap/data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7:ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7 -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro  -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/usr/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/usr/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/include/fortify -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/include " CXXFLAGS="-Os -pipe -mno-branch-likely -mips32r2 -mtune=24kc -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -msoft-float -iremap/data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7:ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7 -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro  -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/usr/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/usr/include -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/include/fortify -I/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/include " LDFLAGS="-L/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/usr/lib 
-L/data00/home/j3l11234/openwrt/openwrt/staging_dir/target-mipsel_24kc_musl/lib -L/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/usr/lib -L/data00/home/j3l11234/openwrt/openwrt/staging_dir/toolchain-mipsel_24kc_gcc-7.5.0_musl/lib -znow -zrelro " make  -C /data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7/. AR="mipsel-openwrt-linux-musl-gcc-ar" AS="mipsel-openwrt-linux-musl-gcc -c -Os -pipe -mno-branch-likely -mips32r2 -mtune=24kc -fno-caller-saves -fno-plt -fhonour-copts -Wno-error=unused-but-set-variable -Wno-error=unused-result -msoft-float -iremap/data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7:ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7 -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro" LD=mipsel-openwrt-linux-musl-ld NM="mipsel-openwrt-linux-musl-gcc-nm" CC="mipsel-openwrt-linux-musl-gcc" GCC="mipsel-openwrt-linux-musl-gcc" CXX="mipsel-openwrt-linux-musl-g++" RANLIB="mipsel-openwrt-linux-musl-gcc-ranlib" STRIP=mipsel-openwrt-linux-musl-strip OBJCOPY=mipsel-openwrt-linux-musl-objcopy OBJDUMP=mipsel-openwrt-linux-musl-objdump SIZE=mipsel-openwrt-linux-musl-size CROSS="mipsel-openwrt-linux-musl-" ARCH="mipsel" ;
make[4]: Entering directory '/data00/home/j3l11234/openwrt/openwrt/build_dir/target-mipsel_24kc_musl/ipt2socks/ipt2socks-1.1.2-cfbc2189356aba7fcafb0bc961a95419f313d8a7'
mipsel-openwrt-linux-musl-gcc -std=c99 -Wall -Wextra -O3 -pthread -c ipt2socks.c -o ipt2socks.o
cc1: note: someone does not honour COPTS correctly, passed 0 times
ipt2socks.c: In function 'tcp_stream_payload_forward_cb':
ipt2socks.c:729:25: warning: implicit declaration of function 'splice'; did you mean 'pipe'? [-Wimplicit-function-declaration]
         ssize_t nrecv = splice(self_watcher->fd, NULL, self_pipefd[1], NULL, TCP_SPLICE_MAXLEN, SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
                         ^~~~~~
                         pipe
ipt2socks.c:729:97: error: 'SPLICE_F_MOVE' undeclared (first use in this function)
         ssize_t nrecv = splice(self_watcher->fd, NULL, self_pipefd[1], NULL, TCP_SPLICE_MAXLEN, SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
                                                                                                 ^~~~~~~~~~~~~
ipt2socks.c:729:97: note: each undeclared identifier is reported only once for each function it appears in
ipt2socks.c:729:113: error: 'SPLICE_F_NONBLOCK' undeclared (first use in this function); did you mean 'SOCK_NONBLOCK'?
         ssize_t nrecv = splice(self_watcher->fd, NULL, self_pipefd[1], NULL, TCP_SPLICE_MAXLEN, SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
                                                                                                                 ^~~~~~~~~~~~~~~~~
                                                                                                                 SOCK_NONBLOCK
Makefile:28: recipe for target 'ipt2socks.o' failed
make[4]: *** [ipt2socks.o] Error 1

Compilation fails on Ubuntu 18.04

The system is Ubuntu 18.04.3 LTS. After installing libuv0.10-dev 0.10.36-4 (apt-get install libuv-dev), compilation fails; the last part of the error output is excerpted below:

In file included from netutils.h:10:0,
                 from lrucache.h:6,
                 from ipt2socks.c:3:
/usr/include/uv.h:1242:15: note: expected ‘uv_timer_cb {aka void (*)(struct uv_timer_s *, int)}’ but argument is of type ‘void (*)(uv_timer_t *) {aka void (*)(struct uv_timer_s *)}’
 UV_EXTERN int uv_timer_start(uv_timer_t* handle,
               ^~~~~~~~~~~~~~
In file included from lrucache.h:6:0,
                 from ipt2socks.c:3:
netutils.h:149:39: error: incompatible type for argument 1 of ‘uv_strerror’
 #define errstring(errnum) uv_strerror(-(errnum))
                                       ^
logutils.h:24:19: note: in expansion of macro ‘errstring’
                 ##__VA_ARGS__);                                              \
                   ^~~~~~~~~~~
ipt2socks.c:1243:9: note: in expansion of macro ‘LOGERR’
         LOGERR("[udp_client_recv_cb] failed to send data to local client: (%d) %s", errno, errstring(errno));
         ^~~~~~
In file included from netutils.h:10:0,
                 from lrucache.h:6,
                 from ipt2socks.c:3:
/usr/include/uv.h:450:23: note: expected ‘uv_err_t {aka struct uv_err_s}’ but argument is of type ‘int’
 UV_EXTERN const char* uv_strerror(uv_err_t err);
                       ^~~~~~~~~~~
Makefile:26: recipe for target 'ipt2socks.o' failed
make: *** [ipt2socks.o] Error 1

After reinstalling libuv1-dev 1.18.0-3 (apt-get install libuv1-dev), compilation failed again with the following errors:

gcc -std=c99 -Wall -Wextra -O3 -pthread  -c ipt2socks.c -o ipt2socks.o
ipt2socks.c: In function ‘udp_socks5_resp_read_cb’:
ipt2socks.c:1061:13: warning: implicit declaration of function ‘uv_udp_connect’; did you mean ‘uv_tcp_connect’? [-Wimplicit-function-declaration]
     nread = uv_udp_connect(udp_handle, (void *)&server_skaddr);
             ^~~~~~~~~~~~~~
             uv_tcp_connect
gcc -std=c99 -Wall -Wextra -O3 -pthread  -c lrucache.c -o lrucache.o
gcc -std=c99 -Wall -Wextra -O3 -pthread  -c netutils.c -o netutils.o
gcc -std=c99 -Wall -Wextra -O3 -pthread  -s -o ipt2socks ipt2socks.o lrucache.o netutils.o  -luv
ipt2socks.o: In function `udp_socks5_resp_read_cb':
ipt2socks.c:(.text+0x4be9): undefined reference to `uv_udp_connect'
ipt2socks.c:(.text+0x4e26): undefined reference to `uv_udp_connect'
collect2: error: ld returned 1 exit status
Makefile:23: recipe for target 'ipt2socks' failed
make: *** [ipt2socks] Error 1
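Both failures trace back to libuv being too old: the 0.10 series has an entirely different API (a `uv_err_t`-based `uv_strerror()`, two-argument timer callbacks), and `uv_udp_connect()` only appeared in libuv 1.27.0, which Ubuntu 18.04's 1.18.0 package predates. A compile-time guard (a sketch using the `UV_VERSION_HEX` macro from `uv/version.h`) would surface this before the link step:

```c
#include <uv.h>

/* uv_udp_connect() appeared in libuv 1.27.0; on libuv 0.10 the
   macro is absent and evaluates to 0, so the guard fires there too */
#if UV_VERSION_HEX < ((1 << 16) | (27 << 8))
#error "this code needs libuv >= 1.27.0 (uv_udp_connect)"
#endif
```

On Ubuntu 18.04 the practical fixes are building libuv >= 1.27 from source or simply using the static binaries from the releases page.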

Seeking help with an nftables global-proxy configuration

I'm currently on OpenWrt with fw4. An nftables + rule/route setup based on hev-socks5-tproxy works for me, but I'd also like to compare it against ipt2socks.
The hev-socks5-tproxy docs are fairly complete; my networking basics are weak, though, and I can't translate the ss-tproxy setup into nft rules myself. It would be great if you could flesh out the global-proxy setup instructions for beginners. Many thanks!
Here is my current setup; after starting ipt2socks it keeps logging the errors below:

2024-01-02 17:31:29 ERR: [new_nonblock_sockfd] socket(AF_INET, SOCK_STREAM): No file descriptors available
2024-01-02 17:31:29 ERR: [set_tcp_nodelay] setsockopt(-1, TCP_NODELAY): Bad file descriptor
2024-01-02 17:31:29 ERR: [set_tcp_quickack] setsockopt(-1, TCP_QUICKACK): Bad file descriptor
2024-01-02 17:31:29 ERR: [set_tcp_keepalive] setsockopt(-1, SO_KEEPALIVE): Bad file descriptor
2024-01-02 17:31:29 ERR: [tcp_tproxy_accept_cb] connect to 121.37.247.85#30001: Bad file descriptor
2024-01-02 17:31:29 ERR: [tcp_socks5_recv_authresp_cb] recv from 121.37.247.85#30001: Connection reset by peer
[tcp_tproxy_accept_cb] accept tcp4 socket: No file descriptors available

  1. nft config:
table inet mangle {
	set byp4 {
		typeof ip daddr
		flags interval
		elements = { 0.0.0.0/8, 10.0.0.0/8,
			     127.0.0.0/8, 169.254.0.0/16,
			     172.16.0.0/12, 192.0.0.0/24,
			     192.0.2.0/24, 192.88.99.0/24,
			     192.168.0.0/16, 198.18.0.0/15,
			     198.51.100.0/24, 203.0.113.0/24,
			     224.0.0.0/4, 240.0.0.0/4 }
	}

	set byp6 {
		typeof ip6 daddr
		flags interval
		elements = { ::,
			     ::1,
			     ::ffff:0:0:0/96,
			     64:ff9b::/96,
			     100::/64,
			     2001::/32,
			     2001:20::/28,
			     2001:db8::/32,
			     2002::/16,
			     fc00::/7,
			     fe80::/10,
			     ff00::/8 }
	}

	chain prerouting {
		type filter hook prerouting priority mangle; policy accept;
		ip daddr @byp4 return
		ip6 daddr @byp6 return
		tcp dport 0-65535 tproxy to :1088 meta mark set 0x00000440 accept
		udp dport 0-65535 tproxy to :1088 meta mark set 0x00000440 accept
	}

	chain output {
		type route hook output priority mangle; policy accept;
		ip daddr @byp4 return
		ip6 daddr @byp6 return
		tcp dport 0-65535 meta mark set 0x00000440
		udp dport 0-65535 meta mark set 0x00000440
	}
}
  2. Routing config
    ip rule add fwmark 1088 table 100
    ip route add local default dev lo table 100

  3. Startup script
    ./ipt2socks -s 111.111.111.111 -p 30001 -a uid -k pwd
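The "No file descriptors available" errors are the classic signature of a redirection loop: the output chain marks ipt2socks's own connections to the socks5 server, the policy route delivers them back to the TPROXY port, and each pass opens more sockets until the fd limit is exhausted. The output chain needs an exemption for the proxy's own traffic before any mark rule; one hedged sketch, assuming ipt2socks is started under a dedicated account (the name "proxy" below is a placeholder):

```shell
# run ipt2socks as an unprivileged user, e.g. "proxy" (hypothetical),
# then skip that user's packets before anything gets marked:
nft insert rule inet mangle output meta skuid "proxy" return
```

`nft insert` places the rule at the top of the chain, so it matches before the `meta mark set 0x00000440` rules that cause the loop.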

Please help me figure out why forwarding UDP through ipt2socks fails while xray works fine

I get errors when forwarding UDP with ipt2socks and can't tell what's wrong. Please take a look, thanks!

1. TPROXY box: 192.168.1.2, Debian 9 x64
2. brook box: 192.168.1.5, Windows 10 x64
3. UDP client: 192.168.1.9, Windows 7 x64

On box 3 I issue a UDP request with nslookup www.google.com 8.8.8.8. Forwarding through xray, it succeeds on the first try.
Forwarding through ipt2socks, it fails, and nslookup ends up sending the UDP request 5 times.

The screenshots below show the relevant configuration and error output.

xray-linux-config-a
linux-iptables-a
xray-brook-good-a
ipt2-brook-bad-a
ipt2-brook-bad-linux-a
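Without the screenshots inline this is hard to diagnose from text alone, but a capture on the TPROXY box (192.168.1.2) usually tells whether the DNS probes reach it and whether the socks5 UDP leg toward brook actually goes out. A sketch (port 1080 for the brook socks5 listener is an assumption; substitute the real port):

```shell
# watch the client's DNS probes arriving and the socks5 UDP traffic leaving;
# replace 1080 with brook's actual socks5 port
tcpdump -ni any 'udp and (port 53 or port 1080)'
```

If the probes arrive but no socks5 UDP leaves, the problem is on the TPROXY/ipt2socks side; if socks5 UDP leaves but no reply returns, the socks5 server's UDP relay is the suspect.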

When the socks server runs on the local machine, requests fall into a socks loop and nothing works

172.16.1.4 is the server's IP; a socks5 service listens on 172.16.1.4:1080.

Start ipt2socks as a REDIRECT transparent proxy:

ipt2socks -R -u ltproxy -s 172.16.1.4 -p 1080 -l 60633

For simplicity, only traffic to 113.31.103.132 is transparently proxied:

iptables -t nat -N LLTPROXY
iptables -t nat -A PREROUTING -p tcp -j LLTPROXY
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner ltproxy -j LLTPROXY
iptables -t nat -A LLTPROXY -p tcp -d 113.31.103.132 -j REDIRECT --to-port 60633

With the configuration above, requests fall into a socks loop and nothing works.

With an external socks server instead of the locally hosted one, everything works fine. How can this be fixed?
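The loop most likely comes from the socks5 server's own outbound connection: ipt2socks runs as ltproxy and is exempted by the `! --uid-owner ltproxy` match, but when the local socks5 server then connects out to 113.31.103.132, that connection also traverses OUTPUT and is redirected straight back to port 60633. Exempting the account the socks5 server runs under breaks the cycle; a sketch, where "socksuser" is a placeholder for that account:

```shell
# skip traffic generated by the local socks5 server itself
# ("socksuser" is hypothetical -- use the account that actually runs it)
iptables -t nat -I OUTPUT -p tcp -m owner --uid-owner socksuser -j RETURN
```

Running both ipt2socks and the socks5 server under the same ltproxy user achieves the same effect with the existing rule.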
