Please show InfiniBand devices also.
:: Code ::
[r**t@bigpingu01 ~]# inxi -nN
Network:   Card-1: Intel 82576 Gigabit Network Connection driver: igb
           IF: eth0 state: up speed: 1000 Mbps duplex: full mac: 00:25:90:4e:9a:40
           Card-2: Intel 82576 Gigabit Network Connection driver: igb
           IF: eth1 state: down mac: 00:25:90:4e:9a:41

[r**t@bigpingu01 ~]# dmesg | grep Mell
mlx4_core: Mellanox ConnectX core driver v1.1 (Dec, 2011)
mlx4_en: Mellanox ConnectX HCA Ethernet driver v2.0 (Dec 2011)
<mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0 (April 4, 2008)

[r**t@bigpingu01 ~]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:4e:9a:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:90ff:fe4e:9a40/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:4e:9a:41 brd ff:ff:ff:ff:ff:ff
4: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256
    link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:a9 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.22.44.201/24 brd 172.22.44.255 scope global ib0
    inet6 fe80::225:90ff:ff07:c6a9/64 scope link
       valid_lft forever preferred_lft forever
5: ib1: <BROADCAST,MULTICAST> mtu 4092 qdisc noop state DOWN qlen 256
    link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:aa brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: eth0.228@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:25:90:4e:9a:40 brd ff:ff:ff:ff:ff:ff
    inet 172.22.28.46/24 brd 172.22.28.255 scope global eth0.228
    inet6 fe80::225:90ff:fe4e:9a40/64 scope link
       valid_lft forever preferred_lft forever
8: ib0.8100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc pfifo_fast state UP qlen 256
    link/infiniband 80:00:00:4c:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:a9 brd 00:ff:ff:ff:ff:12:40:1b:81:00:00:00:00:00:00:00:ff:ff:ff:ff
    inet 10.10.100.1/24 brd 10.10.100.255 scope global ib0.8100
    inet6 fe80::225:90ff:ff07:c6a9/64 scope link
       valid_lft forever preferred_lft forever
9: ib0.8101: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256
    link/infiniband 80:00:00:4d:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:a9 brd 00:ff:ff:ff:ff:12:40:1b:81:01:00:00:00:00:00:00:ff:ff:ff:ff
    inet 10.10.101.1/24 brd 10.10.101.255 scope global ib0.8101
    inet6 fe80::225:90ff:ff07:c6a9/64 scope link
       valid_lft forever preferred_lft forever
10: ib0.8110: <BROADCAST,MULTICAST> mtu 4092 qdisc pfifo_fast state DOWN qlen 256
    link/infiniband 80:00:00:4e:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:a9 brd 00:ff:ff:ff:ff:12:40:1b:81:10:00:00:00:00:00:00:ff:ff:ff:ff
    inet 10.10.110.1/24 brd 10.10.110.255 scope global ib0.8110
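For reference, the same hardware shows up in two places under /sys on a box like this. A quick hand-typed illustration (plain shell commands added for context, not inxi output; paths assume the stock mlx4/IPoIB sysfs layout):

:: Code ::
ls /sys/class/infiniband/                  # the HCA itself, e.g. mlx4_0
ls /sys/class/infiniband/mlx4_0/ports/     # its physical ports: 1  2
ls -d /sys/class/net/ib*                   # the IPoIB netdevs: ib0 ib1 ib0.8100 ...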
|||||
This is not the place to file inxi issue reports; those go to code.google.com/p/inxi/issues/list

I'm not familiar with InfiniBand, so you will have to explain what it is and why it should be included. Also, with a fully upgraded inxi (2.1.28), run this command: inxi -xx@ 14. That will upload all the system info I need to actually implement a feature.
|||||
InfiniBand is a switched-fabric network interconnect used in high-performance computing and enterprise data centers. We would like to see the InfiniBand interfaces in 'inxi' so we can easily identify devices and build kickstart scripts for automated deployment of server nodes (a rough sketch of that idea follows below). It is not critical to see the InfiniBand devices if it's too much trouble; we just thought it would be nice to have. We'll be using 'inxi' anyway.

Thanks for the reply.
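Purely as a hypothetical illustration of the kickstart angle mentioned above; the %pre snippet, the /tmp/ib-network.ks file name, and the static bootproto are made up for this example and are not taken from this thread or from inxi:

:: Code ::
%pre --interpreter=/bin/bash
# Hypothetical sketch: find the node's first InfiniBand interface so the
# rest of the kickstart can configure it (ARPHRD_INFINIBAND is type 32).
IB_IF=""
for dev in /sys/class/net/*; do
    if [ "$(cat "$dev/type" 2>/dev/null)" = "32" ]; then
        IB_IF=$(basename "$dev")
        break
    fi
done
# Hand the result to the main kickstart section, e.g. via %include /tmp/ib-network.ks
echo "network --device=$IB_IF --bootproto=static" > /tmp/ib-network.ks
%end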
|||||
I didn't say it was too much trouble; I just wanted to know why the feature was wanted. I take the simple position that if an inxi user can show a valid reason for a feature, which you have now done, it will be added if it's feasible.

I'll still need the inxi -xx@14 data in order to implement the feature; that gives me all the working system-info files I can plug into the logic to emulate the system and its features. I see one debugger upload; if this is your data, let me know: inxi-bigping....

Update: looks like it's the right one, thanks, I'll take a look. Always happy to support real servers and the needs of real sysadmins with inxi. Thanks.
|||||
I can get tentative but not perfect support.
To test this, update inxi like this: inxi -! 11. That updates from the branches/one inxi, which has an attempt at InfiniBand support; it's too complicated for me to emulate locally, unfortunately. Then check it locally with inxi -Iixxx; the -I line should show 2.1.28-1-b1, to confirm you updated to the right version.

You should now see most of the InfiniBand data with inxi -nxxx, and I think -ixxx will show the networking data. Note that I believe -i would already have shown that no matter what, unless I'm mistaken. Also note that I do not attempt to link the output of the -i lines with the -n lines; that's too complicated, so those two things are not connected internally.

Because your sample shows two IDs, ib0 and ib1, on the same bus ID, inxi has to drop the second and any later IDs on that bus ID to get the rest of the data. So as it stands now, raw, you will only get card data for the first detected ID, ib0; otherwise it won't work. I can't guarantee this will work because I can't test it, but it's a start.

Post inxi -ixxxIz output here (-z hides the NIC IDs etc.), after confirming that the mac address and the rest appear as expected.

For background: inxi uses lspci -v to get the bus ID, then uses the bus ID to query /sys for the other data about the card: speed, duplex (those two don't seem to be present for InfiniBand; duplex I can see why not), mac address, and state. A rough sketch of that lookup follows below.

See how that goes.
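A rough bash sketch of that lspci-to-/sys lookup, for anyone following along; the 02:00.0 value is just an example bus ID as lspci -v would report it, and the script is only my illustration of the /sys layout, not inxi's actual code:

:: Code ::
#!/bin/bash
# Illustration only (not inxi's code): given a PCI bus ID from lspci -v,
# find the network interface(s) on it and read state/mac/speed/duplex
# from /sys/class/net, the same attributes inxi reports.
busid="02:00.0"    # example bus ID

for dev in /sys/class/net/*; do
    # physical netdevs have a "device" symlink to their PCI device,
    # e.g. ../../devices/pci0000:00/0000:00:03.0/0000:02:00.0
    pci=$(readlink -f "$dev/device" 2>/dev/null) || continue
    case "$pci" in
        *":$busid")
            iface=$(basename "$dev")
            echo "IF: $iface"
            echo "  state:  $(cat "$dev/operstate" 2>/dev/null)"
            echo "  mac:    $(cat "$dev/address" 2>/dev/null)"
            # speed/duplex are typically missing for IPoIB interfaces
            echo "  speed:  $(cat "$dev/speed" 2>/dev/null || echo N/A)"
            echo "  duplex: $(cat "$dev/duplex" 2>/dev/null || echo N/A)"
            ;;
    esac
done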
|||||
This is great! Thanks a lot! The IB adapter has two ports, which is probably why ib0 is detected first. Port ib0 is up and ib1 is down.
:: Code ::
[r**t@bigpingu01 etc]# inxi --version
inxi 2.1.28-01-b1 (2014-07-21)

## Here's the output from: inxi -ixxxlz
[r**t@bigpingu01 etc]# inxi -ixxxlz
Network:   Card-1: Mellanox MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
           driver: mlx4_core v: 1.1 bus-ID: 02:00.0 chip-ID: 15b3:673c
           IF: ib0 ib1 state: N/A speed: N/A duplex: N/A mac: N/A
           Card-2: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: ec00 Root bus-ID: 05:00.0 chip-ID: 8086:10e7
           IF: eth0 state: up speed: 1000 Mbps duplex: full mac: <filter>
           Card-3: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: e880 Root bus-ID: 05:00.1 chip-ID: 8086:10e7
           IF: eth1 state: down mac: <filter>
           WAN IP: <filter>
           IF: bond0 ip: N/A ip-v6: N/A
           IF: ib0 ip: <filter> ip-v6: <filter>
           IF: ib1 ip: N/A ip-v6: N/A
           IF: eth0 ip: N/A ip-v6: <filter>
           IF: eth1 ip: N/A ip-v6: N/A
           IF: ib0.8100 ip: <filter> ip-v6: <filter>
           IF: eth0.228@eth0 ip: <filter> ip-v6: <filter>
           IF: ib0.8110 ip: <filter> ip-v6: N/A
           IF: ib0.8101 ip: <filter> ip-v6: <filter>
Partition: ID-1: / size: 50G used: 16G (34%) fs: ext4 dev: /dev/dm-0 label: N/A
           ID-2: /boot size: 485M used: 54M (12%) fs: ext4 dev: /dev/sda1 label: N/A
           ID-3: /home size: 405G used: 33G (9%) fs: ext4 dev: /dev/dm-2 label: N/A
           ID-4: swap-1 size: 16.90GB used: 0.00GB (0%) fs: swap dev: /dev/dm-1 label: N/A
|||||
Looks like I was close, but I didn't quite get the extra data for ib0, and there's a line break that needs to be cleaned out.

I'll upload a new branches version today to see if I can get that handled. There is also a way to handle the two ports on the IB device; it's tricky, but it may be worth trying, and I'll see if I have time.

Sorry for the lack of clarity: I wanted -I (capital 'eye'), not a lowercase 'el', lol, but the two are impossible to tell apart. Your output is better anyway, since all I really want to confirm is that you got the right branch version. In the future I'll ask for the data the way you gave it, since that is not ambiguous, i.e.:

inxi --version
inxi <new features>
|||||
Update: inxi -! 11
then try it again; this is 2.1.28-02-b1. This version should correct the failed ID, and should fill in the per-ID data like state, mac address, and so on. Also, to handle any device that may report more than one ID per bus ID, it will now, when the advanced modes -n/-i are used, show one full output sequence per ID as well (i.e. one for ib0 and one for ib1, with state up for ib0, state down for ib1, and so on). That wasn't as hard to do as I thought it would be.

Please confirm it's working, with output, and once I see that it works I'll put this into the new stable. The good thing about this update is that it handles every scenario with more than one network port per PCI bus ID device, not just InfiniBand (a rough illustration of that grouping follows below). Not for USB, but that's fine.
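Not inxi's actual code, but a rough bash illustration of that "more than one port per bus ID" grouping; it assumes interfaces backed by a PCI device expose the usual /sys/class/net/<if>/device symlink:

:: Code ::
#!/bin/bash
# Illustration only: group network interfaces by the PCI bus ID of the
# device behind them, so a dual-port card shows one bus ID with two IFs.
declare -A by_busid
for dev in /sys/class/net/*; do
    pci=$(readlink -f "$dev/device" 2>/dev/null) || continue  # skips lo, bond0, ...
    busid=${pci##*/}                                          # e.g. 0000:02:00.0
    by_busid[$busid]+="$(basename "$dev") "
done
for busid in "${!by_busid[@]}"; do
    echo "$busid -> ${by_busid[$busid]}"
done

On this system that should collect ib0 and ib1 together under the Mellanox card's bus ID and each igb port under its own, matching what inxi reports above.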
|||||
To cut a long story short, the MAC address of an IB port (which carries the Port GUID) has 20 octets, unlike an Ethernet port's six. When using IPoIB, the IPv6 address is based on this Port GUID. (A small sketch of how the GUID sits inside that address follows below.)
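To make that concrete, assuming (as the addresses in this thread suggest) that the last 8 octets of the 20-octet IPoIB link-layer address are the Port GUID, one could pull it out like this; a hand-written sketch, not anything inxi does:

:: Code ::
# The 20-octet IPoIB address of ib0, as ip link / sysfs report it:
cat /sys/class/net/ib0/address
# -> 80:00:00:48:fe:80:00:00:00:00:00:00:00:25:90:ff:ff:07:c6:a9
#    The last 8 octets (00:25:90:ff:ff:07:c6:a9) are the Port GUID,
#    0x002590ffff07c6a9 in the ibstat output below; the IPv6 link-local
#    fe80::225:90ff:ff07:c6a9 is derived from that GUID.

# Extract just the Port GUID portion:
awk -F: '{ printf "0x"; for (i = NF - 7; i <= NF; i++) printf "%s", $i; print "" }' /sys/class/net/ib0/address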
For reference, "MAC addresses are truncated for Infiniband" is a resolved Puppet Labs issue on the same point: https://projects.puppetlabs.com/issues/1415

The IB ports, illustrated:

:: Code ::
[r**t@bigpingu01 ~]# ibstat
CA 'mlx4_0'
        CA type: MT26428
        Number of ports: 2
        Firmware version: 2.9.1000
        Hardware version: b0
        Node GUID: 0x002590ffff07c6a8
        System image GUID: 0x002590ffff07c6ab
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 23
                LMC: 0
                SM lid: 16
                Capability mask: 0x02510868
                Port GUID: 0x002590ffff07c6a9
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 25
                LMC: 0
                SM lid: 16
                Capability mask: 0x02510868
                Port GUID: 0x002590ffff07c6aa
                Link layer: InfiniBand

[r**t@bigpingu01 ~]# ibv_devinfo
hca_id: mlx4_0
        transport:              InfiniBand (0)
        fw_ver:                 2.9.1000
        node_guid:              0025:90ff:ff07:c6a8
        sys_image_guid:         0025:90ff:ff07:c6ab
        vendor_id:              0x02c9
        vendor_part_id:         26428
        hw_ver:                 0xB0
        board_id:               SM_1081010B01
        phys_port_cnt:          2
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     4096 (5)
                        sm_lid:         16
                        port_lid:       23
                        port_lmc:       0x00
                        link_layer:     InfiniBand
                port:   2
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     4096 (5)
                        sm_lid:         16
                        port_lid:       25
                        port_lmc:       0x00
                        link_layer:     InfiniBand

[r**t@bigpingu01 ~]# ibstatus
Infiniband device 'mlx4_0' port 1 status:
        default gid:     fe80:0000:0000:0000:0025:90ff:ff07:c6a9
        base lid:        0x17
        sm lid:          0x10
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
        link_layer:      InfiniBand

Infiniband device 'mlx4_0' port 2 status:
        default gid:     fe80:0000:0000:0000:0025:90ff:ff07:c6aa
        base lid:        0x19
        sm lid:          0x10
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
        link_layer:      InfiniBand

:: Code ::
# output from inxi
[r**t@bigpingu01 ~]# inxi --version
inxi 2.1.28-02-b1 (2014-07-22)

[r**t@bigpingu01 ~]# inxi -Nx
Network:   Card-1: Mellanox MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
           driver: mlx4_core v: 1.1 bus-ID: 02:00.0
           Card-2: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: ec00 Root bus-ID: 05:00.0
           Card-3: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: e880 Root bus-ID: 05:00.1

[r**t@bigpingu01 ~]# inxi -ixxxIz
Network:   Card-1: Mellanox MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
           driver: mlx4_core v: 1.1 bus-ID: 02:00.0 chip-ID: 15b3:673c
           IF: ib0 ib1 state: N/A speed: N/A duplex: N/A mac: N/A
           Card-2: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: ec00 Root bus-ID: 05:00.0 chip-ID: 8086:10e7
           IF: eth0 state: up speed: 1000 Mbps duplex: full mac: <filter>
           Card-3: Intel 82576 Gigabit Network Connection
           driver: igb v: 4.0.1-k ports: e880 Root bus-ID: 05:00.1 chip-ID: 8086:10e7
           IF: eth1 state: down mac: <filter>
           WAN IP: <filter>
           IF: bond0 ip: N/A ip-v6: N/A
           IF: ib0 ip: <filter> ip-v6: <filter>
           IF: ib1 ip: N/A ip-v6: N/A
           IF: eth0 ip: N/A ip-v6: <filter>
           IF: eth1 ip: N/A ip-v6: N/A
           IF: ib0.8100 ip: <filter> ip-v6: <filter>
           IF: eth0.228@eth0 ip: <filter> ip-v6: <filter>
           IF: ib0.8110 ip: <filter> ip-v6: N/A
           IF: ib0.8101 ip: <filter> ip-v6: <filter>
Info:      Processes: 449 Uptime: 169 days Memory: 1568.5/32099.3MB
           Init: Upstart v: 0.6.5 runlevel: 3 default: 3 Gcc sys: 4.4.7
           Client: Shell (bash 4.1.21 running in tty 1) inxi: 2.1.28-2-b1
[r**t@bigpingu01 ~]#
|||||
I thought I had it working, oh well.
I'll review the fix again to see why it didn't work. Basically, once this works, each InfiniBand port will show as a separate card/device in the inxi output; there's no other way to do it. The state and mac address will then be filled out correctly. I thought I had it working, but I guess not.
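For what it's worth, the per-port state and rate that output would eventually need are also sitting in sysfs on the HCA side; a small hand-rolled sketch (not inxi code; paths as seen on this mlx4 system):

:: Code ::
# Illustration only: per-port data straight from /sys/class/infiniband.
for port in /sys/class/infiniband/mlx4_0/ports/*; do
    echo "port $(basename "$port"):"
    echo "  state: $(cat "$port/state")"   # e.g. 4: ACTIVE
    echo "  rate:  $(cat "$port/rate")"    # e.g. 40 Gb/sec (4X QDR)
done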
|||||