Softpanorama


Classic Network Utilities

News | See Also | Recommended Links | Recommended Articles | Reference | Network Troubleshooting Tools | TCPreplay
ifconfig | ethtool | ndd | Solaris route | Linux route command | netstat | Wireshark
ping | traceroute | mtr | snoop | tcpdump | ngrep | netcat
nmap | ntop | rsync | snort | Packet Generation Tools | Humor | Etc

There are at least a dozen classic network tools. Among them are ifconfig, route, netstat, ping, traceroute, mtr, snoop, tcpdump, ngrep, netcat, nmap, ntop, rsync, and snort.

Several network utilities can be used in pipelines (netcat can even create pipelines across the network).

In January 2007 I added a page about TCP/IP troubleshooting tools to complement this one.

A very good list of free network tools can be found at Top 100 Network Security Tools.

Dr. Nikolai Bezroukov



NEWS CONTENTS

Old News ;-)

[Nov 15, 2018] 10 Linux Commands For Network Diagnostics - LinuxAndUbuntu - Linux News FOSS Reviews Linux Tutorials HowTo

Nov 15, 2018 | www.linuxandubuntu.com
It is difficult to find a Linux computer that is not connected to a network, be it a server or a workstation. From time to time it becomes necessary to diagnose faults, intermittence, or slowness in the network. In this article, we will review some of the Linux commands most used for network diagnostics.

1. ping

One of the first commands, if not the first, to reach for when diagnosing a network failure or intermittence. The ping tool helps us determine whether there is connectivity on the network, be it local or the Internet.

[root@horla]# ping www.linuxandubuntu.com
PING www.linuxandubuntu.com (173.274.34.38) 56(84) bytes of data.
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=1 ttl=59 time=2.52 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=2 ttl=59 time=2.26 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=3 ttl=59 time=2.31 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=4 ttl=59 time=2.36 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=5 ttl=59 time=2.33 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=6 ttl=59 time=2.24 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=7 ttl=59 time=2.35 ms
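For scripting, ping's exit status is often more useful than its human-readable output. A minimal sketch (192.0.2.1 is a reserved documentation address used here as a stand-in; substitute a real host):

```shell
# Use ping's exit status in a script: 0 means at least one reply arrived.
# -c 3 sends three probes, -W 2 waits at most two seconds per reply.
if ping -c 3 -W 2 192.0.2.1 >/dev/null 2>&1; then
    echo "host reachable"
else
    echo "host unreachable"
fi
```

This pattern is handy in cron jobs and monitoring scripts, where parsing the round-trip-time lines is unnecessary.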
2. traceroute

This command shows the hops needed to reach a destination; in this case, the hops required to reach our website, www.linuxandubuntu.com. The test was done from a laptop running Linux.
horla@horla-ProBook:~$ traceroute www.linuxandubuntu.com
traceroute to www.linuxandubuntu.com (173.274.34.38), 30 hops max, 60 byte packets
 1  linuxandubuntu.com (192.168.1.1)  267.686 ms  267.656 ms  267.616 ms
 2  10.104.0.1 (10.104.0.1)  267.630 ms  267.579 ms  267.553 ms
 3  10.226.252.209 (10.226.252.209)  267.459 ms  267.426 ms  267.396 ms
 4  * * *
 5  10.111.2.137 (10.111.2.137)  266.913 ms  10.111.2.141 (10.111.2.141)  266.784 ms  10.111.2.101 (10.111.2.101)  266.678 ms
 6  5.53.0.149 (5.53.0.149)  266.594 ms  104.340 ms  104.273 ms
 7  5.53.3.155 (5.53.3.155)  135.133 ms  94.142.98.147 (94.142.98.147)  135.055 ms  176.52.255.35 (176.52.255.35)  135.069 ms
 8  94.142.127.229 (94.142.127.229)  197.890 ms  5.53.6.49 (5.53.6.49)  197.850 ms  94.142.126.161 (94.142.126.161)  223.327 ms
 9  ae-11.r07.nycmny01.us.bb.gin.ntt.net (129.250.9.1)  197.702 ms  197.715 ms  180.145 ms
10  * * *
11  csc180.gsc.webair.net (173.239.0.26)  179.719 ms  149.475 ms  149.383 ms
12  dsn010.gsc.webair.net (173.239.0.34)  149.288 ms  168.309 ms  168.202 ms
13  r4-nyc.webserversystems.com (173.274.34.38)  168.086 ms  168.105 ms  142.733 ms
horla@horla-ProBook:~$
3. route

This command shows the routing table our Linux machine uses to connect to the network. In this case, our machine reaches the outside world through the router 192.168.1.1.
horla@horla-ProBook:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    600    0        0 wlo1
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlo1
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlo1
horla@horla-ProBook:~$
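Under the hood, route -n (like netstat -rn) simply parses /proc/net/route, where addresses are stored as 32-bit little-endian hex values. A rough sketch of that decoding in plain shell:

```shell
# Decode /proc/net/route the way route -n does: the Destination and Gateway
# columns are little-endian hex, e.g. 0101A8C0 decodes to 192.168.1.1.
while read -r iface dest gateway rest; do
    [ "$iface" = "Iface" ] && continue        # skip the header line
    d=""; g=""
    for i in 7 5 3 1; do                      # walk the hex byte pairs in reverse
        d="$d.$(printf '%d' "0x$(echo "$dest"    | cut -c$i-$((i+1)))")"
        g="$g.$(printf '%d' "0x$(echo "$gateway" | cut -c$i-$((i+1)))")"
    done
    printf '%-8s dst=%s gw=%s\n' "$iface" "${d#.}" "${g#.}"
done < /proc/net/route
```

A destination of 0.0.0.0 is the default route, matching the first line of the route -n output above.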
4. dig

This command lets us verify that DNS is working correctly; before using it, we should check which DNS server is set in the network configuration. In this example, we look up the IP address of our website, www.linuxandubuntu.com, which returns 173.274.34.38.
horla-ProBook:~$ dig www.linuxandubuntu.com

; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.linuxandubuntu.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12083
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.linuxandubuntu.com.        IN      A

;; ANSWER SECTION:
www.linuxandubuntu.com. 2821    IN      A       173.274.34.38

;; Query time: 21 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Wed Nov 7 19:58:30 PET 2018
;; MSG SIZE  rcvd: 51

horla@horla-ProBook:~$
5. ethtool

This tool is a replacement for mii-tool. It ships with CentOS 6 and later and lets us see whether the network card is physically connected to the network, that is, whether the network cable is actually plugged into the switch.
# ethtool eth0
Settings for eth0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes
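When ethtool is unavailable, or the driver reports little (as in the "Unknown!" fields above), the kernel exposes the same link information through sysfs. A sketch, using lo as a stand-in interface name so it runs anywhere:

```shell
# Read link state from sysfs instead of ethtool. "lo" is a placeholder;
# substitute a real NIC such as eth0 to check a physical cable.
IF=lo
echo "$IF operstate: $(cat /sys/class/net/$IF/operstate)"   # up / down / unknown
```

For a physical interface, /sys/class/net/$IF/carrier (1 or 0) answers the same "is the cable plugged in" question as ethtool's "Link detected" line.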
6. ip addr ls

Another Linux-specific tool, which lists the network cards and their respective IP addresses. It is very useful when several IP addresses are configured.
[root@linux named]# ip addr ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:15:5d:a0:f6:05 brd ff:ff:ff:ff:ff:ff
    inet 193.82.34.169/27 brd 190.82.35.192 scope global eth6
    inet 192.168.61.10/24 brd 192.168.61.255 scope global eth6:1
    inet6 fe80::215:5dff:fea0:f605/64 scope link
       valid_lft forever preferred_lft forever
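For scripting, newer iproute2 versions also offer terser output modes than the multi-line block above. A sketch (assumes the ip command is installed):

```shell
# Brief and one-line output modes of ip, which are easier to parse in
# scripts than the default multi-line "ip addr ls" format.
if command -v ip >/dev/null 2>&1; then
    ip -brief addr show          # one line per interface: name, state, addresses
    ip -oneline addr show lo     # one record per line, newlines shown as '\'
else
    echo "iproute2 not installed"
fi
```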
7. ifconfig

As essential as the previous ones, ifconfig shows the network configuration of the cards installed in our machine. In this case it shows a physical network card, enp37s0, that is disconnected; the loopback interface lo; and the wireless card wlo1, which is connected to the network. Note the installed cards and the assigned IP addresses.
horla@horla-ProBook:~$ ifconfig
enp37s0   Link encap:Ethernet  HWaddr 2c:41:38:15:4b:0e
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:19095 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19095 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1716020 (1.7 MB)  TX bytes:1716020 (1.7 MB)

wlo1      Link encap:Ethernet  HWaddr 20:10:7a:fc:b1:44
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::2b5d:1b14:75a:e095/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1660063 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1285046 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:966719020 (966.7 MB)  TX bytes:209302107 (209.3 MB)

horla@horla-ProBook:~$
8. mtr

Another of our favorite tools. mtr (My Traceroute) shows the router hops and pings each one, which is very useful for determining which routers add delay to network traffic.
                              My traceroute  [v0.75]
router02 (0.0.0.0)                                       Nov  7 20:19:24 2018
Resolver: Received error response 2. (server failure)
Keys: Help  Display mode  Restart statistics  Order of fields  quit
                                      Packets               Pings
 Host                               Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. router2-linuxandubuntu.com       0.0%    11    0.7   0.7   0.6   0.8   0.1
 2. 173.255.239.16                   0.0%    11    0.8   0.9   0.8   1.6   0.2
 3. 173.255.239.8                    0.0%    11    2.9   3.2   0.8   7.8   2.1
 4. ???
 5. es0.nyc4.webair.net              0.0%    10    2.0   2.6   1.8   7.7   1.8
 6. csc180.gsc.webair.net            0.0%    10    2.6   2.6   2.6   2.7   0.1
 7. dsn010.gsc.webair.net            0.0%    10    2.2   2.2   2.1   2.3   0.1
 8. r4-nyc.webserversystems.com      0.0%    10    2.3   2.4   2.2   2.5   0.1

9. nslookup

Another tool for finding the IP address of the host we want to reach. In this case, we look up the IP of our website, www.linuxandubuntu.com.
# nslookup www.linuxandubuntu.com
Server:         127.0.0.1
Address:        127.0.0.1#53
Non-authoritative answer:
Name: www.linuxandubuntu.com
Address: 173.274.34.38
10. nmtui-edit

nmtui (NetworkManager Text User Interface) is a command-line front end to NetworkManager. It uses ncurses, so it lets us configure the network easily from the terminal, without additional dependencies, through a text-based interface.

Conclusion

With these networking commands we can manage the various network parameters of a Linux environment much more directly and precisely. And with the mtr command mentioned above we get a simpler view of the state of our network, letting us check its different aspects with an eye toward optimization. Thanks for reading.

[Oct 14, 2018] There're a metric fuckton of reports of systemd killing detached/nohup'd processes

Notable quotes:
"... Reading stuff in /proc is a standard mechanism and where appropriate, all the tools are doing the same including 'ss' that you mentioned (which is btw very poorly designed) ..."
Oct 14, 2018 | linux.slashdot.org
Andrew Gryaznov ( 4260673 ) #56684222 )

Total nonsense ( Score: 3 , Interesting)

I am Linux kernel network and proprietary distributions developer and have actually read the code.

Reading stuff in /proc is a standard mechanism and where appropriate, all the tools are doing the same including 'ss' that you mentioned (which is btw very poorly designed)

Also there are several implementations of the net tools, the one from busybox probably the most famous alternative one and implementations don't hesitate changing how, when and what is being presented.

What is true though is that Linux kernel APIs are sometimes messy and tools like e.g. pyroute2 are struggling with working around limitations and confusions. There is also a big mess with the whole netfilter package as the only "API" is the iptables command-line tool itself.

Linux is arguably the biggest and most important project on Earth and should respect all views, races and opinions. If you would like to implement a more efficient and streamlined network interface (which I very much beg for and may eventually find time to do) - then I'm all in with you. I have some ideas of how to make the interface programmable by extending JIT rules engine and making possible to implement the most demanding network logic in kernel directly (e.g. protocols like mptcp and algorithms like Google Congestion Control for WebRTC).

adosch ( 1397357 ) , Sunday May 27, 2018 @04:04PM ( #56684724 )
Thats... the argument? FML ( Score: 4 , Interesting)

The OP's argument is that netlink sockets are more efficient in theory, so we should abandon anything that uses the pseudo-proc filesystem, re-invent the wheel, and move even farther from the UNIX tradition and POSIX compliance? And it may be slower on larger systems? Define that for me, because I've never experienced it. I've worked on single stove-pipe x86 systems, through the 'SPARC architecture' generation when everyone thought Sun/Solaris was the way to go with entire systems in a 42U rack, IRIX systems, all the way to hundreds of RPM-based Linux distro nodes, physical, hypervised, and containerized, in an HPC, which are LARGE compute systems (fat and compute nodes).

That's a total shit comment with zero facts to back it up. This is like Good Will Hunting 'the bar scene' revisited...

OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser. Any true sys-admin is going to flip-their-shit just like we ALL did with systemd, and that shit still needs to die. There, I got that off my chest.

I'd say you got two things right, but are completely off on one of them:

* Your description of inefficient is what you got right: you sound like my mother or grandmother describing their computing experience looking at Pinterest in a web browser. You might as well have just said slow, without any bearing, educated guess, or reason. Sigh...

* I would agree that some of these tools need to change, but only to handle deeper kernel containerization being built into Linux. One example that comes to mind is 'hostnamectl' where it's more dev-ops centric in terms of 'what' slice or provision you're referencing. A lot of those tools like ifconfig, route and alike still do work in any Linux environment, containerized or not --- fuck, they work today .

Anymore, I'm just a disgruntled and I'm sure soon-to-be-modded-down voice on /. that should be taken with a grain of salt. I'm not happy with the way the movements of Linux have gone, and if this doesn't sound old hat I don't know what is: At the end of the day, you have to embrace change. I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about is as end-user administrators of those tools we've grown to be complacent about. I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with.

Greyfox ( 87712 ) writes:
Re: ( Score: 3 )

Yeah. The other day I set up some demo video streaming on a Linux box. Fire up screen, start my streaming program. Disconnect screen and exit my ssh system, and my streaming freezes. There're a metric fuckton of reports of systemd killing detached/nohup'd processes, but I check my config file and it's not that. Although them being that willing to walk away from expected system behavior is already cause to blow a gasket. But no, something else is going on here. I tweak the streaming code to catch all catchab

jgotts ( 2785 ) writes: < jgottsNO@SPAMgmail.com > on Sunday May 27, 2018 @06:17PM ( #56685380 )
Some historical color ( Score: 4 , Interesting)

Just to give you guys some color commentary, I was participating quite heavily in Linux development from 1994-1999, and Linus even added me to the CREDITS file while I was at the University of Michigan for my fairly modest contributions to the kernel. [I prefer application development, and I'm still a Linux developer after 24 years. I currently work for the company Internet Brands.]

What I remember about ip and net is that they came about seemingly out of nowhere two decades ago and the person who wrote the tools could barely communicate in English. There was no documentation. net-tools by that time was a well-understood and well-documented package, and many Linux devs at the time had UNIX experience pre-dating Linux (which was announced in 1991 but not very usable until 1994).

We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway.

Given that, ip and net were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)?

Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch.

Although, to be fair, I still use ifconfig, even if it is not installed by default.

[Oct 14, 2018] The problem isn't so much new tools as new tools that suck

Systemd looks OK until you get into major troubles and start troubleshooting. After that you are ready to kill systemd developers and blow up Red Hat headquarters ;-)
Notable quotes:
"... Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again an the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody, they can compete on merit. As soon as there is pressure to use something new though, you can be sure it is inferior. ..."
Oct 14, 2018 | linux.slashdot.org

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Sunday May 27, 2018 @11:14AM ( #56683018 ) Homepage Journal

Re:That would break scripts which use the UI ( Score: 5 , Informative)
In general, it's better for application programs, including scripts to use an application programming interface (API) such as /proc, rather than a user interface such as ifconfig, but in reality tons of scripts do use ifconfig and such.

...and they have no other choice, and shell scripting is a central feature of UNIX.

The problem isn't so much new tools as new tools that suck. If I just type ifconfig it will show me the state of all the active interfaces on the system. If I type ifconfig interface I get back pretty much everything I want to know about it. If I want to get the same data back with the ip tool, not only can't I, but I have to type multiple commands, with far more complex arguments.

The problem isn't new tools. It's crap tools.
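For readers keeping score in this debate, the read-only commands line up roughly as follows; a sketch assuming iproute2 is installed:

```shell
# Rough iproute2 equivalents of the classic read-only net-tools invocations.
if command -v ip >/dev/null 2>&1; then
    ip addr show     # ~ ifconfig -a
    ip route show    # ~ route -n
    ip -s link       # ~ netstat -i / ifconfig interface counters
    ip neigh show    # ~ arp -n
else
    echo "iproute2 not installed"
fi
```

Note that, as the comment says, the old single ifconfig invocation maps onto several ip subcommands; there is no one-shot equivalent.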

gweihir ( 88907 ) , Sunday May 27, 2018 @12:22PM ( #56683440 )
Re:That would break scripts which use the UI ( Score: 5 , Insightful)
The problem isn't new tools. It's crap tools.

Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again, and the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody; they can compete on merit. As soon as there is pressure to use something new, though, you can be sure it is inferior.

Anonymous Coward , Sunday May 27, 2018 @02:00PM ( #56684068 )
Re:That would break scripts which use the UI ( Score: 5 , Interesting)
The problem isn't new tools. It's crap tools.

The problem isn't new tools. It's not even crap tools. It's the mindset that we need to get rid of a ~70KB netstat, ~120KB ifconfig, etc. Like others have posted, this has more to do with the egos of the new tools' creators and/or their supporters, who see the old tools as some sort of competition. Well, that's the real problem, then, isn't it? They don't want to face competition, or the notion that their tools aren't superior enough in the eyes of users to justify switching completely, so they must force the issue.

Now, it'd be different if this was 5 years down the road, netstat wasn't being maintained*, and most scripts/dependents had already been converted over. At that point there'd be a good, serious reason to consider removing an outdated package. That's obviously not the debate, though.

* Vs developed. If seven year old stable tools are sufficiently bug free that no further work is necessary, that's a good thing.

locofungus ( 179280 ) , Sunday May 27, 2018 @02:46PM ( #56684296 )
Re:That would break scripts which use the UI ( Score: 4 , Informative)
If I type ifconfig interface I get back pretty much everything I want to know about it

How do you tell from ifconfig output which addresses are deprecated? When I run ifconfig eth0.100 it lists 8 global addresses. I can deduce that the one with fffe in the middle is the permanent address, but I have no idea which address it will use for outgoing connections.

ip addr show dev eth0.100 tells me what I need to know. And it's only a few more keystrokes to type.

Anonymous Coward , Sunday May 27, 2018 @11:13AM ( #56683016 )
Re:So ( Score: 5 , Insightful)

Following the systemd model, "if it aint broken, you're not trying hard enough"...

Anonymous Coward , Sunday May 27, 2018 @11:35AM ( #56683144 )
That's the reason ( Score: 5 , Interesting)

It did one thing: maintain the routing table.

"ip" (and "ip2" and whatever that other candidate not-so-better not-so-replacement of ifconfig was) all have the same problem: They try to be the one tool that does everything "ip". That's "assign ip address somewhere", "route the table", and all that. But that means you still need a complete zoo of other tools, like brconfig, iwconfig/iw/whatever-this-week.

In other words, it's a modeling difference. On sane systems, ifconfig _configures the interface_, for all protocols and hardware features, bridges, vlans, what-have-you. And then route _configures the routing table_. On linux... the poor kids didn't understand what they were doing, couldn't fix their broken ifconfig to save their lives, and so went off to reinvent the wheel, badly, a couple times over.

And I say the blogposter is just as much an idiot.

Per various people, netstat et al operate by reading various files in /proc, and doing this is not the most efficient thing in the world

So don't use it. That does not mean you gotta change the user interface too. Sheesh.

However, the deeper issue is the interface that netstat, ifconfig, and company present to users.

No, that interface is a close match to the hardware. Here is an interface, IOW something that connects to a radio or a wire, and you can make it ready to talk IP (or back when, IPX, appletalk, and whatever other networks your system supported). That makes those tools hardware-centric. At least on sane systems. It's when you want to pretend shit that it all goes awry. And boy, does linux like to pretend. The linux ifconfig-replacements are IP-only-stack-centric. Which causes problems.

For example because that only does half the job and you still need the aforementioned zoo of helper utilities that do things you can have ifconfig do if your system is halfway sane. Which linux isn't, it's just completely confused. As is this blogposter.

On the other hand, the users expect netstat, ifconfig and so on to have their traditional interface (in terms of output, command line arguments, and so on); any number of scripts and tools fish things out of ifconfig output, for example.

linux' ifconfig always was enormously shitty here. It outputs lots of stuff I expect to find through netstat and it doesn't output stuff I expect to find out through ifconfig. That's linux, and that is NOT "traditional" compared to, say, the *BSDs.

As the Linux kernel has changed how it does networking, this has presented things like ifconfig with a deep conflict; their traditional output is no longer necessarily an accurate representation of reality.

Was it ever? linux is the great pretender here.

But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of there so far has caused me problems that were best resolved by getting rid of that crap code. Case in point: "Network-Manager". Another attempt at "replacing ifconfig" with something that causes problems and solves very few.
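The /proc mechanism this thread keeps referring to is easy to see directly; netstat -tln boils down to parsing /proc/net/tcp, roughly like this sketch:

```shell
# What netstat -tln parses: /proc/net/tcp, where state 0A means LISTEN and
# local_address is IP:port in little-endian hex (0100007F:0016 = 127.0.0.1:22).
echo "local_address(hex) state"
awk 'NR > 1 && $4 == "0A" { print $2, $4 }' /proc/net/tcp
```

The per-line text parsing is why reading /proc is considered slower than the binary netlink interface, though for a handful of sockets the difference is negligible.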

locofungus ( 179280 ) , Sunday May 27, 2018 @03:27PM ( #56684516 )
Re:That's the reason ( Score: 4 , Insightful)
It did one thing: maintain the routing table.

Should the ip rule stuff be part of route or a separate command?

There are things that could be better with ip. IIRC it's very fussy about where the table selector goes in the argument list but route doesn't support this at all.

I also don't think route has anything like 'nexthop dev $if' which is a godsend for ipv6 configuration.

I stayed with route for years. But ipv6 exposed how incomplete the tool is - and clearly nobody cares enough to add all the missing functionality.

Perhaps ip addr, ip route, ip rule, ip mroute, ip link should be separate commands. I've never looked at the sourcecode to see whether it's mostly common or mostly separate.
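For concreteness, the rule/table features discussed here look like this in iproute2. A sketch: the state-changing commands are left commented because they need root, and the interface name eth0 and addresses are purely hypothetical:

```shell
# Policy-routing pieces that net-tools' route cannot express.
if command -v ip >/dev/null 2>&1; then
    ip rule show    # list policy rules; a fresh system shows local/main/default
    # ip route add default via 192.0.2.1 table 100   # populate a second table
    # ip rule add from 192.0.2.50 lookup 100         # select it by source IP
    # ip -6 route add default nexthop dev eth0       # the 'nexthop dev' form
else
    echo "iproute2 not installed"
fi
```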

Anonymous Coward writes:
Re: That's the reason ( Score: 3 , Informative)

^this^

The people who think the old tools work fine don't understand all the advanced networking concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP can be assigned to multiple interfaces, there's more than one routing table, firewall rules can add metadata to packets that affects routing, etc. These features can't be accommodated by the old tools without breaking compatibility.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @09:11PM ( #56686032 )
Re:That's the reason ( Score: 3 )
Someone cared enough to implement an entirely different tool to do the same old jobs plus some new stuff, it's too bad they didn't do the sane thing and add that functionality to the old tool where it would have made sense.

It's not that simple. The iproute2 suite wasn't written to *replace* anything.
It was written to provide a user interface to the rapidly expanding RTNL API.
The net-tools maintainers (or anyone who cared) could have started porting it if they liked. They didn't. iproute2 kept growing to provide access to all the new RTNL interfaces, while net-tools got farther and farther behind.
What happened was organic. If someone brought net-tools up to date tomorrow and everyone liked the interface, iproute2 would be dead in its tracks. As it sits, I and most of the more advanced system and network engineers I know have been using iproute2 for just over a decade now (really, since the point where ifconfig became an incomplete and poorly simplified way to manage the networking stack).

DamnOregonian ( 963763 ) , Monday May 28, 2018 @02:26AM ( #56686960 )
Re:That's the reason ( Score: 4 , Informative)

Nope. Kernel authors come up with a fancy new netlink interface for better interaction with the kernel's network stack. They don't give two squirts of piss whether or not a user-space interface exists for it yet. Some guy decides to write an interface to it. Initially, it only supported things like modifying the routing rule database (something that can't be done with route), and he was trying to make an implementation of this protocol, not hack it into software that already has its own framework using different APIs.
This source was always freely available for the net-tools guys to take and add to their own software.
Instead, we get this. [sourceforge.net]
Nobody is giving a positive spin. This is simply how it happened. This is what happens when software isn't maintained, and you don't get to tell other people to maintain it. You're free, right now, today, to port the iproute2 functionality into net-tools. They're unwilling to, however. That's their right. It's also the right of other people to either fork it, or move to more functional software. It's your right to help influence that. Or bitch on slashdot. That probably helps, too.

TeknoHog ( 164938 ) writes:
Re: ( Score: 2 )
keep the command names the same but rewrite how they function?

Well, keep the syntax too, so old scripts would still work. The old command name could just be a script that calls the new commands under the hood. (Perhaps this is just what you meant, but I thought I'd elaborate.)

gweihir ( 88907 ) , Sunday May 27, 2018 @12:18PM ( #56683412 )
Re:So ( Score: 4 , Insightful)
What was the reason for replacing "route" anyhow? It's worked for decades and done one thing.

Idiots that confuse "new" with better and want to put their mark on things. Because they are so much greater than the people that got the things to work originally, right? Same as the systemd crowd. Sometimes, they realize decades later they were stupid, but only after having done a lot of damage for a long time.

TheRaven64 ( 641858 ) writes:
Re: ( Score: 2 )

I didn't RTFA (this is Slashdot, after all) but from TFS it sounds like exactly the reason I moved to FreeBSD in the first place: the Linux attitude of 'our implementation is broken, let's completely change the interface'. ALSA replacing OSS was the instance of this that pushed me away. On Linux, back around 2002, I had some KDE and some GNOME apps that talked to their respective sound daemon, and some things like XMMS and BZFlag that used /dev/dsp directly. Unfortunately, Linux decided to only support s

zippthorne ( 748122 ) writes:
Re: ( Score: 3 )

On the other hand, on most systems, vi is basically an alias to vim....

goombah99 ( 560566 ) , Sunday May 27, 2018 @11:08AM ( #56682986 )
Bad idea ( Score: 5 , Insightful)

Unix was founded on the idea of lots of simple command-line tools that do one job well and don't depend on system idiosyncrasies. If you make a tool have to know the lower layers of the system to exploit them, then you break the encapsulation. Polling /proc has worked across eons of Linux flavors without breaking. When you make everything integrated, it creates paralysis to change down the road for backward compatibility: a small speed gain now for massive fragility and no portability later.

goombah99 ( 560566 ) writes:
Re: ( Score: 3 )

GNU may not be Unix, but its foundational idea lies in the simple command-tool paradigm. It's why GNU was so popular, and it's why people even think that Linux is Unix. That idea is the character of Linux. If you want a marvelously smooth, efficient, consistent integrated system that after a decade of revisions feels like a knotted tangle of twine in your junk drawer, try Windows.

llamalad ( 12917 ) , Sunday May 27, 2018 @11:46AM ( #56683198 )
Re:Bad idea ( Score: 5 , Insightful)

The error you're making is thinking that Linux is UNIX.

It's not. It's merely UNIX-like. And with first SystemD and now this nonsense, it's rapidly becoming less UNIX-like. The Windows of the UNIX(ish) world.

Happily, the BSDs seem to be staying true to their UNIX roots.

petes_PoV ( 912422 ) , Sunday May 27, 2018 @12:01PM ( #56683282 )
The dislike of support work ( Score: 5 , Interesting)
In theory netstat, ifconfig, and company could be rewritten to use netlink too; in practice this doesn't seem to have happened and there may be political issues involving different groups of developers with different opinions on which way to go.

No, it is far simpler than looking for some mythical "political" issues. It is simply that hackers - especially amateur ones, who write code as a hobby - dislike trying to work out how old stuff works. They like writing new stuff, instead.

Partly this is because of the poor documentation: explanations of why things work, what other code was tried but didn't work out, the reasons for weird-looking constructs, techniques and the history behind patches. It could even be that many programmers are wedded to a particular development environment and lack the skill and experience (or find it beyond their capacity) to do things in ways that are alien to it. I feel that another big part is that merely rewriting old code does not allow for the " look how clever I am " element that is present in fresh, new, software. That seems to be a big part of the amateur hacker's effort-reward equation.

One thing that is imperative however is to keep backwards compatibility. So that the same options continue to work and that they provide the same content and format. Possibly Unix / Linux only remaining advantage over Windows for sysadmins is its scripting. If that was lost, there would be little point keeping it around.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:13PM ( #56685074 )
Re:The dislike of support work ( Score: 5 , Insightful)

iproute2 exists because ifconfig, netstat, and route do not support the full capabilities of the linux network stack.
This is because today's network stack is far more complicated than it was in the past. For very simple networks, the old tools work fine. For complicated ones, you must use the new ones.

Your post could not be any more wrong. Your moderation amazes me. It seems that slashdot is full of people who are mostly amateurs.
iproute2 has been the main network management suite for linux amongst higher end sysadmins for a decade. It wasn't written to sate someone's desire to change for the sake of change, to make more complicated, to NIH. It was written because the old tools can't encompass new functionality without being rewritten themselves.

Craig Cruden ( 3592465 ) , Sunday May 27, 2018 @12:11PM ( #56683352 )
So windowification (making it incompatible) ( Score: 5 , Interesting)

So basically there is a proposal to dump existing terminal utilities that are cross-platform and create custom Linux utilities - then get rid of the existing functionality? That would be moronic! I already go nuts remoting into a windows platform and then an AIX and Linux platform and having different command line utilities / directory separators / etc. Adding yet another difference between my Linux and macOS/AIX terminals would absolutely drive me bonkers!

I have no problem with updating or rewriting or adding functionality to existing utilities (for all 'nix platforms), but creating yet another incompatible platform would be crazily annoying.

(not a sys admin, just a dev who has to deal with multiple different server platforms)

Anonymous Coward , Sunday May 27, 2018 @12:16PM ( #56683388 )
Output for 'ip' is machine readable, not human ( Score: 5 , Interesting)

All output for 'ip' is machine readable, not human.
Compare
$ ip route
to
$ route -n

Which is more readable? Fuckers.

Same for
$ ip a
and
$ ifconfig
Which is more readable? Fuckers.

The new commands should generally make the same output as the old, using the same options, by default. Using additional options to get new behavior. -m is commonly used to get "machine readable" output. Fuckers.

It is like the systemd interface fuckers took hold of everything. Fuckers.

BTW, I'm a happy person almost always, but change for the sake of change is fucking stupid.

Want to talk about resolv.conf, anyone? Fuckers! Easier just to purge that shit.
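For what it's worth, iproute2 does offer both audiences something on top of its default format. A sketch, assuming a reasonably recent iproute2 (the -br brief mode and -j JSON flags), guarded so it degrades gracefully where ip is absent:

```shell
# 'ip' can emit human-oriented or machine-oriented output on request:
if command -v ip >/dev/null 2>&1; then
    ip -br addr    # brief mode: one line per interface (name, state, addresses)
    ip -j route    # JSON output, convenient for jq and other scripted consumers
fi
```

The brief mode is much closer to the old at-a-glance summaries, while -j gives scripts a stable parseable format instead of scraped text.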

SigmundFloyd ( 994648 ) , Sunday May 27, 2018 @12:39PM ( #56683558 )
Linux' userland is UNSTABLE ! ( Score: 3 )

I'm growing increasingly annoyed with Linux' userland instability. Seriously considering a switch to NetBSD because I'm SICK of having to learn new ways of doing old things.

For those who are advocating the new tools as additions rather than replacements: Remember that this will lead to some scripts expecting the new tools and some other scripts expecting the old tools. You'll need to keep both flavors installed to do ONE thing. I don't know about you, but I HATE to waste disk space on redundant crap.

fluffernutter ( 1411889 ) , Sunday May 27, 2018 @12:46PM ( #56683592 )
Piss and vinegar ( Score: 5 , Interesting)

What pisses me off is when I go to run ifconfig and it isn't there, and then I Google on it and there doesn't seem to be *any* direct substitute that gives me the same information. If you want to change the command then fine, but allow the same output from the new commands. Furthermore, another bitch I have is most systemd installations don't have an easy substitute for /etc/rc.local.

what about ( 730877 ) , Sunday May 27, 2018 @01:35PM ( #56683874 ) Homepage
Let's try hard to break Linux ( Score: 3 , Insightful)

It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap.

Therefore, the only logical alternative is that they are paid (in some way) to break what is working.

Also, if you rewrite tons of systems tools you have plenty of opportunities to insert useful bugs that can be used by the various spying agencies.

You do not think that the current CPU flaws are just by chance, right?
Imagine the wonder of being able to spy on any machine, regardless of the level of SW protection.

There is no need to point out that I cannot prove it, I know; it just makes sense to me.

Kjella ( 173770 ) writes:
Re: ( Score: 3 )
It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap. (...) There is no need to point out that I cannot prove it, I know, it just make sense to me.

Many developers fix problems like a guy about to lose a two week vacation because he can't find his passport. Rip open every drawer, empty every shelf, spread it all across the tables and floors until you find it, then rush out the door leaving everything in a mess. It solved HIS problem.

WaffleMonster ( 969671 ) , Sunday May 27, 2018 @01:52PM ( #56684010 )
Changes for changes sake ( Score: 4 , Informative)

TFA is full of shit.

IP aliases have always and still do appear in ifconfig as separate logical interfaces.

The assertion that ifconfig only displays one IP address per interface is also demonstrably false.

Using these false bits of information to advocate for change seems rather ridiculous.

One change I would love to see... "ping" bundled with most Linux distros doesn't support IPv6. You have to call the IPv6-specific analogue, which is unworkable. Knowing the address family in advance is not a reasonable expectation, and it works contrary to how all other IPv6-capable software a user would actually run works.

Heck, for a while traceroute supported both address families. The one by Olaf Kirch eons ago did; then someone decided "not invented here" and replaced it with one that works like ping6, where you have to call traceroute6 if you want v6.

It seems nobody spends time fixing broken shit anymore... they just spend their time finding new ways to piss me off. Now I have to type journalctl and wait for hell to freeze over just to liberate log data I previously could access nearly instantaneously. It almost feels like Microsoft's event viewer now.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:30PM ( #56685130 )
Re:Changes for changes sake ( Score: 4 , Insightful)
TFA is full of shit. IP aliases have always and still do appear in ifconfig as separate logical interfaces.

No, you're just ignorant.
Aliases do not appear in ifconfig as separate logical interfaces.
Logical interfaces appear in ifconfig as logical interfaces.
Logical interfaces are one way to add an alias to an interface. A crude way, but a way.

The assertion ifconfig only displays one IP address per interface also demonstrably false.

Nope. Again, you're just ignorant.

root@swalker-samtop:~# tunctl
Set 'tap0' persistent and owned by uid 0
root@swalker-samtop:~# ifconfig tap0 10.10.10.1 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.10.2/24 dev tap0
root@swalker-samtop:~# ifconfig tap0:0 10.10.10.3 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.1.1/24 scope link dev tap0:0
root@swalker-samtop:~# ifconfig tap0 | grep inet
inet 10.10.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
root@swalker-samtop:~# ifconfig tap0:0 | grep inet
inet 10.10.10.3 netmask 255.255.255.0 broadcast 10.10.10.255
root@swalker-samtop:~# ip addr show dev tap0 | grep inet
inet 10.10.1.1/24 scope link tap0
inet 10.10.10.1/24 brd 10.10.10.255 scope global tap0
inet 10.10.10.2/24 scope global secondary tap0
inet 10.10.10.3/24 brd 10.10.10.255 scope global secondary tap0:0

If you don't understand what the differences are, you really aren't qualified to opine on the matter.
Ifconfig is fundamentally incapable of displaying the amount of information that can go with layer-3 addresses, interfaces, and the architecture of the stack in general. This is why iproute2 exists.

JustNiz ( 692889 ) , Sunday May 27, 2018 @01:55PM ( #56684030 )
I propose a new word: ( Score: 5 , Funny)

SysD: (v). To force an unnecessary replacement of something that already works well with an alternative that the majority perceive as fundamentally worse.
Example usage: Wow you really SysD'd that up.

[Jul 16, 2018] netstat to find ports which are in use on linux server

Another example of a more or less complex pipeline, built from netstat, awk, cut, sort, and uniq
Oct 02, 2008 | midnight-cafe.co.uk

Below is a command to find the number of connections to each port in use, using netstat and cut.

netstat -nap | grep 'tcp\|udp' | awk '{print $4}' | cut -d: -f2 | sort | uniq -c | sort -n

Below is a description of each command:

netstat is used to check all incoming and outgoing connections on a Linux server. With grep you can select the lines matching the pattern you defined.

awk is generally used for scanning for a pattern and processing the matching text; it is a powerful tool for shell scripting. sort is used to order the output, and sort -n sorts it in numeric order.

uniq -c collapses adjacent duplicate lines, prefixing each with a count of its occurrences.
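The same pipeline can be exercised on canned data, which makes the idiom easier to see (the addresses below are invented for illustration):

```shell
# Made-up sample of netstat-style output, so the
# sort | uniq -c | sort -n idiom can be demonstrated deterministically.
sample='tcp 0 0 10.0.0.5:22 10.0.0.9:51812 ESTABLISHED
tcp 0 0 10.0.0.5:22 10.0.0.7:40222 ESTABLISHED
tcp 0 0 10.0.0.5:80 10.0.0.9:51900 TIME_WAIT'

# Column 4 is the local address; cut keeps the part after the colon (the port).
printf '%s\n' "$sample" | awk '{print $4}' | cut -d: -f2 | sort | uniq -c | sort -n
# last line:       2 22   (two connections to port 22)
```

Note that uniq only collapses adjacent duplicates, which is why the first sort is required before it.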

[Jul 16, 2018] Listing TCP apps listening on ports

Jun 13, 2018 | www.fredshack.com

netstat -nltp
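As an aside, netstat ultimately gets this information from /proc. A sketch that pulls listening IPv4 TCP ports straight out of /proc/net/tcp (illustrative, not a netstat replacement; IPv6 sockets live in /proc/net/tcp6):

```shell
# In /proc/net/tcp the local address is column 2 (address:port, both hex)
# and column 4 is the socket state; 0A means LISTEN.
# printf's 0x prefix handles the hex-to-decimal conversion.
awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' /proc/net/tcp |
while read -r hexport; do
    printf 'listening on TCP port %d\n' "0x$hexport"
done
```

This needs no root and works even on minimal systems where neither netstat nor ss is installed.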

[Jan 14, 2018] Using telnet to debug connection problems

Jan 14, 2018 | bash-prompt.net

Telnet, the protocol and the command line tool, was how system administrators used to log into remote servers. However, because there is no encryption, all communication, including passwords, is sent in plaintext; as a result, Telnet was abandoned in favour of SSH almost as soon as SSH was created.

For the purposes of logging into a remote server, you should never use it, and probably have never considered it. That does not mean the telnet command isn't a very useful tool for debugging remote connection problems.

In this guide, we will explore using telnet to answer the all too common question, "Why can't I ###### connect‽".

This frustrated question is usually encountered after installing an application server like a web server, an email server, an SSH server, a Samba server, etc., when for some reason the client won't connect to the server.

telnet isn't going to solve your problem, but it will, very quickly, narrow down where you need to start looking.

telnet is a very simple command to use for debugging network-related issues and has the syntax:

telnet <hostname or IP> <port>

Because telnet will initially simply establish a connection to the port without sending any data it can be used with almost any protocol including encrypted protocols.
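The interactive check above can also be scripted. A minimal sketch using bash's /dev/tcp pseudo-device, which attempts the same TCP handshake (port_open is a made-up helper name, and the hostname is this guide's placeholder):

```shell
# A non-interactive stand-in for 'telnet host port'; timeout(1) bounds
# the wait so a silently dropped packet cannot hang the script.
port_open() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open samba.example.com 445; then
    echo "connected"
else
    echo "closed, filtered, or unresolvable"
fi
```

This distinguishes the same cases telnet does: a fast failure means refused or unresolvable, while hitting the timeout suggests a firewall dropping packets.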

There are four main errors that you will encounter when trying to connect to a problem server. We will look at all four, explore what they mean and look at how you should fix them.

For this guide we will assume that we have just installed a Samba server at samba.example.com and we can't get a local client to connect to the server.

Error 1 - The connection that hangs forever

First, we need to attempt to connect to the Samba server with telnet. This is done with the following command (Samba listens on port 445):

telnet samba.example.com 445

Sometimes, the connection will get to this point, stop, and hang indefinitely:

telnet samba.example.com 445
Trying 172.31.25.31...

This means that telnet has not received any response to its request to establish a connection. This can happen for two reasons:

  1. There is a router down between you and the server.
  2. There is a firewall dropping your request.

In order to rule out 1, run a quick mtr samba.example.com to the server. If the server is accessible, then it's a firewall (note: it's almost always a firewall).

Firstly, check if there are any firewall rules on the server itself with the command iptables -L -v -n. If there are none, then you will get the following output:

iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

If you see anything else, then this is likely the problem. In order to check, stop iptables for a moment and run telnet samba.example.com 445 again to see if you can connect. If you still can't connect, see if your provider and/or office has a firewall in place that is blocking you.

Error 2 - DNS problems

A DNS issue will occur if the hostname you are using does not resolve to an IP address. The error that you will see is as follows:

telnet samba.example.com 445
Server lookup failure:  samba.example.com:445, Name or service not known

The first step here is to substitute the IP address of the server for the hostname. If you can connect to the IP but not the hostname then the problem is the hostname.

This can happen for many reasons (I have seen all of the following):

  1. Is the domain registered? Use whois to find out if it is.
  2. Is the domain expired? Use whois to find out if it is.
  3. Are you using the correct hostname? Use dig or host to ensure that the hostname you are using resolves to the correct IP.
  4. Is your A record correct? Check that you didn't accidentally create an A record for something like smaba.example.com .

Always double check the spelling and the correct hostname (is it samba.example.com or samba1.example.com ) as this will often trip you up especially with long, complicated or foreign hostnames.
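These lookups can be scripted too. A small sketch using getent, which exercises the same resolver path (NSS: /etc/hosts, then DNS) that client applications actually use; samba.example.com is the guide's placeholder host:

```shell
# getent follows the system's normal name-lookup order, so it reproduces
# what a client application would see, unlike dig, which queries DNS directly.
if getent hosts samba.example.com; then
    echo "name resolves"
else
    echo "name does not resolve here"
fi
```

Comparing getent (NSS view) against dig (pure DNS view) quickly reveals stale /etc/hosts entries that dig alone would miss.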

Error 3 - The server isn't listening on that port

This error occurs when telnet is able to reach the server but there is nothing listening on the port you specified. The error looks like this:

telnet samba.example.com 445
Trying 172.31.25.31...
telnet: Unable to connect to remote host: Connection refused

This can happen for a couple of reasons:

  1. Are you sure you're connecting to the right server?
  2. Your application server is not listening on the port you think it is. Check exactly what it's doing by running netstat -plunt on the server and see what port it is, in fact, listening on.
  3. The application server isn't running. This can happen when the application server exits immediately and silently after you start it. Start the server and run ps auxf or systemctl status application.service to check that it's running.
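For point 2, a quick way to see what is actually listening (a sketch; ss is iproute2's replacement for netstat, and 445 is the Samba port used throughout this guide):

```shell
# List listening TCP sockets numerically and look for the expected port;
# fall back to netstat where ss is not installed.
if command -v ss >/dev/null 2>&1; then
    ss -ltnp | grep ':445 ' || echo "nothing listening on 445"
else
    netstat -plunt | grep ':445 ' || echo "nothing listening on 445"
fi
```

The -p (process) column needs root to show other users' daemons, so run this with sudo when the owning process matters.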
Error 4 - The connection was closed by the server

This error happens when the connection was successful but the application server has a built-in security measure that killed the connection as soon as it was made. This error looks like:

telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
Connection closed by foreign host.

The last line Connection closed by foreign host. indicates that the connection was actively terminated by the server. In order to fix this, you need to look at the security configuration of the application server to ensure your IP or user is allowed to connect to it.

A successful connection

This is what a successful telnet connection attempt looks like:

telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.

The connection will stay open for a while depending on the timeout of the application server you are connected to.

A telnet connection is closed by typing CTRL+] and then, when you see the telnet> prompt, typing "quit" and hitting ENTER, i.e.:

telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Conclusion

There are a lot of reasons that a client application can't connect to a server. The exact reason can be difficult to establish, especially when the client is a GUI that offers little or no error information. Using telnet and observing the output will allow you to very rapidly narrow down where the problem lies and save you a whole lot of time.

[Jun 10, 2010] Deep-protocol analysis of UNIX networks

Jun 08, 2010 | developerWorks
Parsing the raw data to understand the content

Another way to process the content from tcpdump is to save the raw network packet data to a file and then process the file to find and decode the information that you want.

There are a number of modules in different languages that provide functionality for reading and decoding the data captured by tcpdump and snoop. For example, within Perl, there are two modules: Net::SnoopLog (for snoop) and Net::TcpDumpLog (for tcpdump). These will read the raw data content. The basic interface of both modules is the same.

To start, first you need to create a binary record of the packets going past on the network by writing out the data to a file using either snoop or tcpdump. For this example, we'll use tcpdump and the Net::TcpDumpLog module: $ tcpdump -w packets.raw.

Once you have amassed the network data, you can start to process the contents to find the information you want. Net::TcpDumpLog parses the raw network data saved by tcpdump. Because the data is in its raw binary format, parsing the information requires processing this binary data. For convenience, another suite of modules, NetPacket::*, provides decoding of the raw data.

For example, Listing 8 shows a simple script that prints out the IP address information for all of the packets.

Listing 8. Simple script that prints out the IP address info for all packets
use Net::TcpDumpLog;
use NetPacket::Ethernet;
use NetPacket::IP;

my $log = Net::TcpDumpLog->new();
$log->read("packets.raw");

# Walk every captured packet and print the IP-level endpoints.
foreach my $index ($log->indexes)
{
    my $packet = $log->data($index);

    # Strip the Ethernet framing to get at the payload.
    my $ethernet = NetPacket::Ethernet->decode($packet);

    # 0x0800 is the Ethernet type code for IPv4.
    if ($ethernet->{type} == 0x0800)
    {
        my $ip = NetPacket::IP->decode($ethernet->{data});

        printf("  %s to %s protocol %s\n",
               $ip->{src_ip}, $ip->{dest_ip}, $ip->{proto});
    }
}
The first part is to extract each packet. The Net::TcpDumpLog module serializes each packet, so that we can read each packet by using the packet ID. The data() method then returns the raw data for the entire packet.

As with the output from snoop, we have to extract each of the blocks of data from the raw network packet information. So in this example, we first need to extract the ethernet packet, including the data payload, from the raw network packet. The NetPacket::Ethernet module does this for us.

Since we are looking for IP packets, we can check the Ethernet packet type; IP packets have a type ID of 0x0800.

The NetPacket::IP module can then be used to extract the IP information from the data payload of the Ethernet packet. The module provides the source IP, destination IP and protocol information, among others, which we can then print.

Using this basic framework you can perform more complex lookups and decoding that do not rely on the automated solutions provided by tcpdump or snoop. For example, if you suspect that there is HTTP traffic going past on a non-standard port (i.e., not port 80), you could look for the string HTTP on ports other than 80 from the suspected host IP using the script in Listing 9.


Listing 9. Looking for the string HTTP on ports other than 80
use Net::TcpDumpLog;
use NetPacket::Ethernet;
use NetPacket::IP;
use NetPacket::TCP;

my $log = Net::TcpDumpLog->new();
$log->read("packets.raw");

foreach my $index ($log->indexes)
{
    my $packet = $log->data($index);

    my $ethernet = NetPacket::Ethernet->decode($packet);

    if ($ethernet->{type} == 0x0800)
    {
        my $ip = NetPacket::IP->decode($ethernet->{data});

        if ($ip->{src_ip} eq '192.168.0.2')
        {
            if ($ip->{proto} == 6)
            {
                my $tcp = NetPacket::TCP->decode($ip->{data});

                if (($tcp->{src_port} != 80) &&
                    ($tcp->{data} =~ m/HTTP/))
                {
                    print("Found HTTP traffic on non-port 80\n");
                    printf("%s (port: %d) to %s (port: %d)\n%s\n",
                           $ip->{src_ip},
                           $tcp->{src_port},
                           $ip->{dest_ip},
                           $tcp->{dest_port},
                           $tcp->{data});
                }
            }
        }
    }
}

Running the above script on a sample packet set returned the following shown in Listing 10.


Listing 10. Running the script on a sample packet set
$ perl http-non80.pl
Found HTTP traffic on non-port 80
192.168.0.2 (port: 39280) to 168.143.162.100 (port: 80)
GET /statuses/user_timeline.json HTTP/1.1
Found HTTP traffic on non-port 80
192.168.0.2 (port: 39282) to 168.143.162.100 (port: 80)
GET /statuses/friends_timeline.json HTTP/1

In this particular case we're seeing traffic from the host to an external website (Twitter).

Obviously, in this example, we are dumping out the raw data, but you could use the same basic structure to decode and monitor data in any format, using any public or proprietary protocol structure. If you are using or developing a protocol with this method, and know the protocol format, you could extract and monitor the data being transferred.

Using a protocol analyzer

Although, as already mentioned, tools like tcpdump, iptrace and snoop provide basic network analysis and decoding, there are GUI-based tools that make the process even easier. Wireshark is one such tool that supports a vast array of network protocol decoding and analysis.

One of the main benefits of Wireshark is that you can capture packets over a period of time (just as with tcpdump) and then interactively analyze and filter the content based on the different protocols, ports and other data. Wireshark also supports a huge array of protocol decoders, enabling you to examine in minute detail the contents of the packets and conversations.

Figure 1 shows the basic Wireshark screen, with packets of all types being listed. The window is divided into three main sections: the list of filtered packets, the decoded protocol details, and the raw packet data in hex/ASCII format.

[Aug 6, 2009] Xplico 0.5.2

The goal of Xplico is to extract the application data from an Internet traffic capture. For example, from a pcap file Xplico extracts each email (POP, IMAP, and SMTP protocols), all HTTP contents, each VoIP call (SIP), and so on. Xplico isn't a packet sniffer or a network protocol analyzer; it's an IP/Internet traffic decoder or network forensic analysis tool (NFAT).

[Jan 2, 2008] vnStat

About:
vnStat is a console-based network traffic monitor that keeps a log of hourly, daily, and monthly network traffic for the selected interface(s). However, it isn't a packet sniffer. The traffic information is analyzed from the /proc filesystem. That way, vnStat can be used even without root permissions.
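The /proc source vnStat relies on is /proc/net/dev. A minimal sketch of the same idea (works without root; field positions follow the documented /proc/net/dev layout):

```shell
# /proc/net/dev: two header lines, then one line per interface.
# After the interface name come 8 receive counters then 8 transmit
# counters; once the colon is split off, fields 2 and 10 are the
# rx-bytes and tx-bytes totals since boot.
sed 's/:/ /' /proc/net/dev | awk 'NR > 2 { printf "%-8s rx=%s tx=%s bytes\n", $1, $2, $10 }'
```

The sed step matters because /proc/net/dev may print the name and first counter with no space between them (e.g. "eth0:12345"), which would otherwise break awk's field splitting.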

Release focus: Minor bugfixes

Changes:
This release fixes a bug that caused a segmentation fault if the environment variable "HOME" wasn't defined, which in turn caused most PHP/CGI scripts using vnStat to malfunction. Some minor feature enhancements are also included.

[Apr 15, 2007] freshmeat.net Project details for Tcpreplay by Aaron Turner

Tcpreplay 3.0.RC1 (stable)

This release improves OpenBSD, HP-UX, Cygwin/Win32, x86_64, and little endian support. Enhancements were made to allow editing packets with tcpreplay. libpcap detection was improved.

[Mar 24, 2007] freshmeat.net Project details for Tcpreplay

Tcpreplay 3.0.beta13 released

Tcpreplay is a set of Unix tools which allows the editing and replaying of captured network traffic in pcap (tcpdump) format. It can be used to test a variety of passive and inline network devices, including IPS's, UTM's, routers, firewalls, and NIDS.

Release focus: Major bugfixes

Changes:
This release fixes some serious regression bugs that prevented tcprewrite from editing most packets on Intel and other little-endian systems. Some smaller bugfixes and tweaks to improve replay performance were made.

Author:
Aaron Turner [contact developer]

[Mar 3rd 2006 ] freshmeat.net Project details for netrw by Jiri Denemark

netrw 1.3.1

About: netrw is a simple (but powerful) tool for transporting data over the Internet. Its main purpose is to simplify and speed up file transfers to hosts without an FTP server. It can also be used for uploading data to some other user. It is something like one-way netcat (nc) with some nice features concerning data transfers. It can compute and check message digest (MD5, SHA-1, and some others) of all the data being transferred. It can also print information on progress and average speed. At the end, it sums up the transfer.

Changes: A bug causing netread to sometimes end up in an endless loop after receiving all data was fixed.





Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down you can use the mirror at softpanorama.info

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: October, 15, 2018