
New inxi user, found disk & mem summary bugs
techtoys
Status: Curious
Joined: 28 Oct 2016
Posts: 5
Hi,
I'm a new user of inxi, and agree with all the folks who say "I can't believe I didn't know this was out there!"
It's a pretty awesome tool. I'm putting it through its paces to replace lshw in our environment, and have spotted a few issues so far, even though it's much better than lshw in almost every way. :)

1. On a server with 8 x 4 GB sticks of RAM, inxi correctly detects the number and size of the sticks, but then reports a total capacity that is double what is actually present:
:: Code ::

[root@backup1 tmp]# ./inxi -m
Memory:    Array-1 capacity: 63.75 GB devices: 8 EC: Multi-bit ECC
           Device-1: DIMM1 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-2: DIMM2 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-3: DIMM3 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-4: DIMM4 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-5: DIMM5 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-6: DIMM6 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-7: DIMM7 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>
           Device-8: DIMM8 size: 4 GB speed: 667 MHz (1.5 ns) type: <OUT OF SPEC>

2. On the same server, which has several drives attached, the HDD space total doesn't match the sum of the reported drives:
:: Code ::

[root@backup1 tmp]# ./inxi -Fp
System:    Host: backup1 Kernel: 2.6.18-308.4.1.el5 x86_64 (64 bit) Console: tty 0
           Distro: Red Hat Enterprise Linux Server release 5.3 (Tikanga)
Machine:   Device: Rack Mount Chassis System: Dell product: PowerEdge 1950 serial: 7YY1MF1
           Mobo: Dell model: 0UR033 serial: ..CN697028140738. BIOS: Dell v: 2.6.1 rv 2.6 date: 04/20/2009
CPU(s):    2 Quad core Intel Xeon E5345s (-HT-MCP-SMP-) cache: 8192 KB
           clock speeds: max: 2327 MHz 1: 2327 MHz 2: 2327 MHz 3: 2327 MHz 4: 2327 MHz 5: 2327 MHz 6: 2327 MHz
           7: 2327 MHz 8: 2327 MHz
Graphics:  Card: ATI ES1000
           Display Server: X.org 7.1.1 drivers: ati,radeon tty size: 177x97 Advanced Data: N/A for root out of X
Network:   Card-1: Broadcom NetXtreme II BCM5708 Gigabit Ethernet
           IF: N/A state: N/A speed: N/A duplex: N/A mac: N/A
           Card-2: Broadcom NetXtreme II BCM5708 Gigabit Ethernet
           IF: N/A state: N/A speed: N/A duplex: N/A mac: N/A
Drives:    HDD Total Size: 25498.9GB (80.4% used) ID-1: /dev/sda model: PERC_5/i size: 146.2GB
           ID-2: /dev/sdb model: Universal_Xport size: 0.0GB ID-3: /dev/sdc model: Universal_Xport size: 0.0GB
           ID-4: /dev/sdd model: MD_Virtual_Disk size: 1796.8GB
           ID-5: /dev/sde model: MD_Virtual_Disk size: 2186.1GB
           ID-6: /dev/sdf model: MD_Virtual_Disk size: 2186.1GB
           ID-7: /dev/sdg model: MD_Virtual_Disk size: 626.1GB
           ID-8: /dev/sdh model: MD_Virtual_Disk size: 2186.1GB
           ID-9: /dev/sdi model: MD_Virtual_Disk size: 2186.1GB
           ID-10: /dev/sdj model: MD_Virtual_Disk size: 2186.1GB
           ID-11: /dev/sdk model: MD_Virtual_Disk size: 2186.1GB
           ID-12: /dev/sdl model: MD_Virtual_Disk size: 2186.1GB
           ID-13: /dev/sdm model: MD_Virtual_Disk size: 1068.5GB
           ID-14: /dev/sdn model: MD_Virtual_Disk size: 2186.1GB
           ID-15: /dev/sdo model: MD_Virtual_Disk size: 2186.1GB
           ID-16: /dev/sdp model: MD_Virtual_Disk size: 2186.1GB
           ID-17: /dev/sdq model: MD_Virtual_Disk size: 2186.1GB
           ID-18: /dev/sdr model: MD_Virtual_Disk size: 2186.1GB
           ID-19: /dev/sds model: MD_Virtual_Disk size: 2186.1GB
           ID-20: /dev/sdt model: MD_Virtual_Disk size: 2186.1GB
           ID-21: /dev/sdu model: MD_Virtual_Disk size: 2186.1GB
           ID-22: /dev/sdv model: MD_Virtual_Disk size: 2186.1GB
           ID-23: /dev/sdw model: MD_Virtual_Disk size: 2186.1GB
           ID-24: /dev/sdx model: MD_Virtual_Disk size: 2186.1GB
           ID-25: /dev/sdy model: MD_Virtual_Disk size: 1950.7GB
Partition: ID-1: / size: 4.8G used: 4.0G (88%) fs: ext3 dev: /dev/sda1
           ID-2: /integral size: 105G used: 37G (37%) fs: ext3 dev: /dev/sda7
           ID-3: /var size: 2.9G used: 634M (24%) fs: ext3 dev: /dev/sda6
           ID-4: /home size: 4.8G used: 2.6G (58%) fs: ext3 dev: /dev/sda5
           ID-5: /tmp size: 7.6G used: 449M (7%) fs: ext3 dev: /dev/sda3
           ID-6: /data size: 41T used: 19T (46%) fs: xfs dev: /dev/mapper/vg_data-lv_data
           ID-7: swap-1 size: 8.39GB used: 0.00GB (0%) fs: swap dev: /dev/sda2
RAID:      No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors:   System Temperatures: cpu: 40C mobo: N/A
           Fan Speeds (in rpm): cpu: N/A
Info:      Processes: 219 Uptime: 615 days Memory: 8978.0/32168.0MB Init: SysVinit runlevel: 3
           Client: Shell (bash) inxi: 2.3.3
[root@backup1 tmp]# vgscan
  Reading all physical volumes.  This may take a while...
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Found volume group "vg_data" using metadata type lvm2
[root@backup1 tmp]# lvscan
  ACTIVE            '/dev/vg_data/lv_data' [40.74 TB] inherit

What additional data, if any, is needed to debug these items?

Thanks.
Here's another example of the errors
techtoys
Status: Curious
Joined: 28 Oct 2016
Posts: 5
In this case, the server has 16 GB of RAM, which is reported as 128 GB, and roughly 140 TB of disk space, which is reported as 276 TB.
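
For reference, the 276 TB figure looks like the straight sum of every device inxi lists in the paste below; that is, both SAS paths of each virtual disk get counted. A quick check using the sizes from that output:
:: Code ::

awk 'BEGIN { printf "%.1f\n", 499.6 + 4 * 69001.0 }'   # 276503.6, essentially the reported 276503.4GB
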
:: Code ::

[root@backup ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/sysvg-lv01
                       16G  5.3G  9.7G  36% /
tmpfs                 7.8G   12K  7.8G   1% /dev/shm
/dev/sda1            1008M  159M  799M  17% /boot
/dev/mapper/sysvg-lv04
                       16G  7.4G  7.6G  50% /home
/dev/mapper/sysvg-lv05
                       79G   29G   46G  39% /integral
/dev/mapper/sysvg-lv06
                       79G  6.9G   68G  10% /logs
/dev/mapper/sysvg-lv03
                      5.0G  207M  4.5G   5% /tmp
/dev/mapper/sysvg-lv02
                      260G  112G  135G  46% /var
/dev/mapper/datavg-lvdata
                      126T  120T  5.7T  96% /data
[root@backup ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "sysvg" using metadata type lvm2
  Found volume group "datavg" using metadata type lvm2
[root@backup ~]# lvscan
  ACTIVE            '/dev/sysvg/lv00' [4.00 GiB] inherit
  ACTIVE            '/dev/sysvg/lv01' [16.00 GiB] inherit
  ACTIVE            '/dev/sysvg/lv02' [263.25 GiB] inherit
  ACTIVE            '/dev/sysvg/lv03' [5.00 GiB] inherit
  ACTIVE            '/dev/sysvg/lv04' [16.00 GiB] inherit
  ACTIVE            '/dev/sysvg/lv05' [80.00 GiB] inherit
  ACTIVE            '/dev/sysvg/lv06' [80.00 GiB] inherit
  ACTIVE            '/dev/datavg/lvdata' [125.51 TiB] inherit
[root@backup ~]# /var/tmp/inxi -Fmp
System:    Host: backup Kernel: 2.6.32-358.6.2.el6.x86_64 x86_64 (64 bit) Console: tty 0
           Distro: CentOS release 6.4 (Final)
Machine:   Device: server System: Dell product: PowerEdge R510 serial:
           Mobo: Dell model: 0DPRKF v: A06 serial: ..CN1374024G008V. BIOS: Dell v: 1.12.0 date: 07/26/2013
CPU:       Quad core Intel Xeon E5606 (-HT-MCP-) cache: 8192 KB
           clock speeds: max: 2133 MHz 1: 2133 MHz 2: 2133 MHz 3: 2133 MHz 4: 2133 MHz
Memory:    Array-1 capacity: 128 GB devices: 8 EC: Multi-bit ECC
           Device-1: DIMM_A1 size: 8 GB speed: 1333 MHz type: DDR3
           Device-2: DIMM_A2 size: 8 GB speed: 1333 MHz type: DDR3
           Device-3: DIMM_A3 size: No Module Installed type: DDR3
           Device-4: DIMM_A4 size: No Module Installed type: DDR3
           Device-5: DIMM_B1 size: No Module Installed type: DDR3
           Device-6: DIMM_B2 size: No Module Installed type: DDR3
           Device-7: DIMM_B3 size: No Module Installed type: DDR3
           Device-8: DIMM_B4 size: No Module Installed type: DDR3


Drives:    HDD Total Size: 276503.4GB (47.7% used) ID-1: /dev/sda model: Virtual_Disk size: 499.6GB
           ID-2: /dev/sdd model: MD32xx size: 69001.0GB ID-3: /dev/sdb model: MD32xx size: 69001.0GB
           ID-4: /dev/sdc model: MD32xx size: 69001.0GB ID-5: /dev/sde model: MD32xx size: 69001.0GB
Partition: ID-1: / size: 16G used: 5.3G (36%) fs: ext4 dev: /dev/dm-3
           ID-2: /boot size: 1008M used: 159M (17%) fs: ext3 dev: /dev/sda1
           ID-3: /home size: 16G used: 7.4G (50%) fs: ext4 dev: /dev/dm-6
           ID-4: /integral size: 79G used: 29G (39%) fs: ext4 dev: /dev/dm-7
           ID-5: /logs size: 79G used: 7.0G (10%) fs: ext4 dev: /dev/dm-8
           ID-6: /tmp size: 5.0G used: 207M (5%) fs: ext4 dev: /dev/dm-5
           ID-7: /var size: 260G used: 112G (46%) fs: ext4 dev: /dev/dm-4
           ID-8: /data size: 126T used: 120T (96%) fs: xfs dev: /dev/dm-9
           ID-9: swap-1 size: 4.29GB used: 0.08GB (2%) fs: swap dev: /dev/dm-2

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
Well, you've found the two weak spots in inxi, lol, both are known.

As an aside, I waited literally a few years for the RAM data to be put into /sys, like most of the rest of the system data has been, but it never happened, so I finally broke down and grudgingly used dmidecode, which of course requires root to run, something I really try to avoid if possible. I still hope that one day the full RAM data will land in /sys so inxi can read and process it without dmidecode, but I didn't want to wait forever. Also, dmidecode gives me basically instant and automatic BSD support, so it's fine to have it, ideally as a second fallback rather than the primary source. Note that having the data in /sys wouldn't fix the bad vendor-supplied data issue, but it would remove the dmidecode dependency and the root requirement.

I'll explain the RAM issue first. inxi gets RAM data, unfortunately, from dmidecode. dmidecode itself gets that data from basically two places. The first is very reliable: the actually installed memory, with its real size, speed, and type. All of that is good data, and it appears to come straight from the RAM itself.

This is why you see reliable per stick data.

Now for the sad, dismal second part, which is what made the RAM feature take a month to get even close to stable, with many tens of sample datasets: data the vendor types into input boxes in their DMI info. This is often just copied and pasted between devices with almost no care or thought for its accuracy or correctness. In other words, it is human input, NOT based on the actual capacities.
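
Concretely, those two places correspond to two different DMI record types; this is general dmidecode behaviour, nothing inxi-specific:
:: Code ::

# type 16 is the "Physical Memory Array" record, where the vendor-typed
# "Maximum Capacity" lives; type 17 records are the per-stick "Memory Device" data
dmidecode -t 16
dmidecode -t 17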

This is where the full RAM capacity comes from, for example. inxi will attempt to fix this junk data, and is actually quite a bit more reliable than dmidecode alone, but if the information is logically possible yet wrong, inxi can't really correct it.

Here's an example: the system reports 4 GB of capacity, yet 8 GB are installed and reporting correctly per stick. In this case, inxi ignores the self-reported capacity, bumps it up to 8 GB, and adds (check) to the output to let you know it's trying to figure out the capacity using logic. This doesn't always work. For example, a system could have an 8 GB per-slot maximum with 4 slots, but only a 16 GB overall maximum. In that case, with 2 sticks of 8 GB each installed, inxi would guess the capacity from the slots and stick size and incorrectly report 32 GB for the overall RAM, again with (check) to let you know it's interpolated data.
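
To make that reconciliation step concrete, here's a rough standalone shell sketch of the idea. This is not inxi's actual code, and the awk patterns just assume the usual dmidecode -t memory layout:
:: Code ::

# toy sketch of the capacity sanity check, not inxi's real logic (dmidecode needs root)
dmi=$(dmidecode -t memory 2>/dev/null)

# per-stick sizes: lines like "Size: 4096 MB" or "Size: 4 GB"
installed_mb=$(awk '$1 == "Size:" && $2 ~ /^[0-9]+$/ {
                      s = $2; if ($3 == "GB") s *= 1024; sum += s }
                    END { printf "%d\n", sum }' <<< "$dmi")

# vendor-typed array maximum, e.g. "Maximum Capacity: 64 GB"
reported_mb=$(awk '/Maximum Capacity:/ {
                     s = $3; if ($4 == "GB") s *= 1024; if ($4 == "TB") s *= 1024 * 1024
                     printf "%d\n", s; exit }' <<< "$dmi")
reported_mb=${reported_mb:-0}   # guard for missing data (e.g. run without root)

if [ "$installed_mb" -gt "$reported_mb" ]; then
    # reported maximum is impossible: fall back to the per-stick sum and flag it
    echo "capacity: $(( installed_mb / 1024 )) GB (check)"
else
    # plausible (even if wrong), so it has to be trusted
    echo "capacity: $(( reported_mb / 1024 )) GB"
fi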

Now, that's the case where inxi is either right and more accurate than the reported DMI data, or trying to be right and actually wrong, but still closer to reality than the DMI data.

The other case is this:

The system reports, say, 64 GB capacity and has 4 slots with 8 GB in each slot, so inxi says, fine, I have no logical reason to disbelieve this; there is no contradiction. But the DMI data itself is in fact wrong, and maybe the board only supports 32 GB. Since inxi is trusting the data and not correcting it, it does not show (check), because everything seems OK.

The amazing sloppiness of core kernel and hardware data reporting (not a fault of the kernel, but of the hardware vendors, who fill this data in) is a big reason inxi is so darned long and complicated.

So in this case, if you run, as both root and then as a regular user: inxi -xx@14
it will upload a hardware info data dump from the system in question, which lets me see whether inxi has a bug or, as is usually the case, the DMI output simply contains incorrect data. If no logic can accurately or safely detect an error, inxi has to use the data it's given and trust it. I believe I put it in the man page: NEVER trust the reported total RAM capacity or the per-slot capacity; those are both vendor supplied and quite random, essentially works of neo-realist fiction in terms of their factual validity. Basically, what inxi provides there is the best guess it can make without being blatantly wrong. For example, if you have 32 GB installed and the system reports 16 GB capacity, which is common, inxi drops the system-reported overall capacity and uses the calculated capacity instead, which is closer to reality.
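
That is, just the same command run twice, assuming the downloaded inxi sits in the current directory as in your pastes:
:: Code ::

./inxi -xx@14    # once as root
./inxi -xx@14    # and once again as a regular, non-root user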

2. Now for the second issue, hard disk capacity. That's a real pain. I'm completely aware of it, but it's a monster to handle properly: for example, if the system is using software RAID, inxi would also have to discover which drives are used for RAID, how many, what kind of RAID, and so on, which is very painful logic. I thought about trying to get that accurate but gave up because it was simply too daunting. Instead I went with the less error-prone direct reporting of mounted partition sizes, which of course has the obvious flaw of ignoring unmounted drives (see the sketch below).
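
To illustrate the compromise, here's a rough standalone sketch of the general approach, not inxi's actual code: the raw total comes from the block devices themselves, while the "used" figure can only be read from mounted filesystems, so anything unmounted, unpartitioned, or hidden behind RAID never makes it into the numerator:
:: Code ::

# rough sketch, not inxi's real logic: raw capacity vs. used space on mounts

# raw drive capacity: whole-disk entries (sda, sdb, ...) from /proc/partitions, in 1K blocks
total_kb=$(awk '$4 ~ /^sd[a-z]+$/ { sum += $3 } END { print sum + 0 }' /proc/partitions)

# "used" can only be read from mounted local filesystems
used_kb=$(df -P -l -x tmpfs -x devtmpfs | awk 'NR > 1 { sum += $3 } END { print sum + 0 }')

awk -v t="$total_kb" -v u="$used_kb" \
    'BEGIN { printf "HDD Total Size: %.1fGB (%.1f%% used)\n", t * 1024 / 1e9, u * 100 / t }'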

And if you do run software raid, well, I'll show you a sample:

:: Code ::

Drives:    HDD Total Size: 8501.7GB (18.0% used)
           ID-1: /dev/sdc model: WDC_WD2002FAEX size: 2000.4GB
           ID-2: /dev/sdb model: WDC_WD2002FAEX size: 2000.4GB
           ID-3: /dev/sda model: WDC_WD5001AALS size: 500.1GB
           ID-4: /dev/sdd model: WDC_WD2002FAEX size: 2000.4GB
           ID-5: /dev/sde model: WDC_WD2002FAEX size: 2000.4GB
           Optical: /dev/sr0 model: N/A rev: N/A dev-links: cdrom,cdrw,dvd,dvdrw,scd0
           Features: speed: 48x multisession: yes audio: yes dvd: yes rw: cd-r,cd-rw,dvd-r,dvd-ram state: N/A
Partition: ID-1: / size: 14G used: 5.0G (37%) fs: ext3 dev: /dev/sda2
           label: main-root-1
           ID-2: /home size: 37G used: 1.7G (5%) fs: ext3 dev: /dev/sda3
           label: main-home-1
           ID-3: /drives size: 1.8T used: 1.4T (82%) fs: ext3 dev: /dev/md0
           label: data-drives
           ID-4: swap-1 size: 1.00GB used: 0.02GB (2%) fs: swap dev: /dev/sda1
           label: main-swap-1
RAID:      System: supported: raid1
           Device-1: /dev/md0 - active components: online: sdc1[0] sdb1[1]
           Info: raid: 1 report: 2/2 UU blocks: 1953510841 chunk size: N/A super blocks: 1.2
           Unused Devices: none
Unmounted: ID-1: /dev/sdd1 size: 2000.40G fs: ext3 label: data-sync-1
           ID-2: /dev/sde1 size: 2000.40G fs: ext3 label: main-backups-1


Here you see a system with a 2 x 2 TB RAID array, 2 unmounted 2 TB partitions, and the sadly inaccurate disk use report of 18.0%, which is simply wrong (the quick arithmetic below shows where that number comes from). However, as with the RAM, while the overall totals can be pretty far removed from reality, the per-partition data is completely reliable.
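
A back-of-the-envelope check, assuming that's how the number is built and allowing for df's rounding, shows where the 18.0% comes from: only the mounted filesystems' used space is counted against the raw drive total:
:: Code ::

# the df-derived "used" values above are binary units, the Drives line is decimal GB:
# 1 GiB ~= 1.074 GB, 1 TiB ~= 1099.5 GB
awk 'BEGIN { used = (5.0 + 1.7) * 1.074 + 1.4 * 1099.5 + 0.02
             printf "%.1f%%\n", used * 100 / 8501.7 }'
# prints 18.2%, close to the reported 18.0%; the unmounted 2 TB partitions
# and the rest of the raw capacity never enter the calculation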

The problem here is manifold:

1. I can't easily get the used percentage of unmounted partitions, certainly not without root; that's a tough one.

2. Handling RAID means the disk logic, which technically runs before the RAID logic (itself quite inadequate, because it only supports ZFS and mdraid), would need all the RAID partition data, and would then need to know the RAID type, used vs. backup partitions, etc. I considered that so daunting a task that I honestly just gave up on it.

3. In the above example, / and /home are on a 500 GB disk whose remaining space isn't partitioned and of course isn't mounted. So roughly 450 GB is just ignored, yet still forms part of the overall reported disk capacity. Technically speaking, if there were no unmounted partitions and no software RAID, that unmounted, unpartitioned space would actually be handled correctly.

The above is a nice example.

Other bad things:

1. Mount a 1 TB USB drive and the inxi report happily includes it, since inxi has no way to distinguish between a temporarily plugged-in external drive and a permanently attached one.

So that's not super reliable.

The basic compromise I made with the disk capacity was this: for a standard user system with no RAID and no unmounted partitions, the disk percent used is exactly correct. It goes downhill from there, but since that probably covers maybe 75% of user systems, I figured 75% accurate is not too bad. Ideally it would be closer to 95%, but that's difficult, as noted.

Specific errors that make sense can be patched, but I have to be very careful, because I'd rather inxi be at least roughly right in most cases than wrong in many and right in a few.

Sensors, by the way, is similarly junk data, another feature that took almost a month to get to the desired state of 'good enough'. The first version of -s on my local dev system took me all of 2 or 3 hours to get running; then, sadly, the user datasets poured in, and I realized my fantasy of reliable data was just that, a fantasy. So now -s, like the RAM data, is largely synthesized into a best guess of what the system actually means.
techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
By the way, because inxi did not display the (check) qualifier after the capacity data, you can fairly safely assume that the DMI data itself gave that overall capacity, and that it does not contradict the installed slot data, so inxi has no choice but to use it. So I believe that one is not really a solvable issue.

The hard disk data is a different case, but it's just so complicated that I honestly gave up there. The example I showed you is exactly the kind of thing it would be nice for inxi to handle accurately, but with RAID, unmounted partitions, etc., it's quite involved.

I have a faint memory of being sufficiently dissatisfied with the inxi -R RAID handling, since it only handles mdraid and ZFS (not btrfs, not any other software RAID), that I didn't consider it a good enough tool to reliably resolve the wrong hard disk used percentages, so I just left it as is.
techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
Oh, by the way, missing partition size data often has an easy fix in inxi. It's usually caused by unknown mount points or an unexpected syntax, so inxi simply doesn't 'see' the partition as a valid one. That's really easy to fix: inxi just uses whitelists for partition data, so it's normally a matter of looking at the relevant inxi data dump and adding to the whitelists, which usually fixes such failures right away, at least for missed partitions.
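
Just to illustrate the whitelist idea, here's a toy example; the list and names below are made up, not inxi's actual ones:
:: Code ::

# toy illustration only: keep rows whose filesystem type is on an allowed list
allowed_fs='ext2|ext3|ext4|xfs|btrfs|jfs|reiserfs|zfs'
df -P -T | awk -v ok="$allowed_fs" \
    'NR > 1 && $2 ~ ("^(" ok ")$") { print $7, $4, $3 }'
# a mount whose type (or mount point pattern) isn't recognized simply never shows up,
# which is why adding one entry to a whitelist usually fixes a "missing partition" report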

That's the most frequent thing I update for issue reports: new user configurations, new /dev names, some weird unknown syntax for something (as LVM was at one point), etc.

That's an easy issue to handle in most cases.
techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
The second example's disk size may be a math error; I'd have to check. The TB support was added more in theory than from testing against disks that large, back when I created that logic.

I'd have to see the raw data dump to be sure, but that one is probably fixable, I think.
techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
I've taken a quick look but without the debugger data I can't go much further.

As noted, the RAM capacity data is generally not fixable by inxi; that's usually a system error, so without the full debugger dataset I can't do anything there.

For the hard disk capacity, I checked the code, but again, without the debugger data, I can't go very far since I have to see the exact data inxi used.

:: Code ::
df -P -T --exclude-type=aufs --exclude-type=devfs --exclude-type=devtmpfs \
   --exclude-type=fdescfs --exclude-type=iso9660 --exclude-type=linprocfs --exclude-type=nfs \
   --exclude-type=nfs3 --exclude-type=nfs4 --exclude-type=nfs5 --exclude-type=procfs --exclude-type=smbfs \
   --exclude-type=squashfs --exclude-type=sysfs --exclude-type=tmpfs --exclude-type=unionfs


Note that without the requested data, I'm not going to spend time trying to pull information out bit by bit, since that takes me at least 20x longer than working from the actual data, so I'm leaving it up to you to complete this issue with the requested datasets.

inxi uses a lot of different data queries to build its partition and drive summaries, so until I get a data set, I'm leaving this as a 'probably has some fixable issues but won't know until I get the full data I need'.

In theory inxi should not have any issues with /dev/mapper devices, since those are already handled, but I won't know until I see the actual data the system generated during inxi's run.

There are a few places the disk capacity error could have crept in, but I can't know without the real data.

Since my time has some value, I won't spend any more time on this issue until I get the requested full datasets.
Sorry for the radio silence... Data uploaded.
techtoys
Status: Curious
Joined: 28 Oct 2016
Posts: 5
I don't know how I missed your quick and very detailed responses (I should have expected the developer to be as helpful and awesome as the tool, though sadly that is often not the case...), but I've seen them now.

Thank you, and they make total sense. I've seen the sloppiness first hand, many times.

I've uploaded the data requested, and wow, that's totally slick! I thought I was going to have to attach the data here, but I see inxi collects and uploads the data automatically.

The system in question is a Dell R510 attached to an external storage array via a SAS HBA. (Again, apologies for the delay since my last post; I've set a watch on this thread that goes directly to my monitored inbox here at work.)

The storage array presents 2 x 69000 GB virtual block devices over two different SAS paths, so it appears that 4 x 69000 GB devices are connected. A SAS multipath config on the server lets it recognize that only 2 devices are actually connected, each over 2 paths, and the system then manages the disk access correctly. Without the multipath config enabled, the volumes are not accessible.
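
For what it's worth, the duplication is easy to confirm on the box itself, because both paths to a LUN report the same SCSI WWID (the scsi_id path below is the usual EL6 location, so treat it as an assumption):
:: Code ::

# each MD32xx LUN should report the same WWID on two different /dev/sd* nodes
for d in sdb sdc sdd sde; do
    printf '%s  ' "$d"
    /lib/udev/scsi_id --whitelisted --device=/dev/$d
done

# or, with device-mapper-multipath set up, list the LUN-to-path mapping directly
multipath -ll
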
techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4127
Location: East Coast, West Coast? I know it's one of them.
I actually just rewrote the debugging tool: the two parts that used hard-to-impossible-to-maintain Python now use Perl, which is insanely fast and works natively on every system (solving Python's fatal flaw: unstable language features). So I'm glad you noticed, heh. For example: old system, Python 2.5, never worked; Python 2.6 and 2.7, worked; Python 3.x, assume failure. In contrast, on the same old system Perl 5.10 is perfect, and on a new system Perl 5.26 works. Python was also too clumsy and imprecise to work around a /sys tree-walk hang I discovered with new kernels.

In a perfect world I'd rewrite inxi in Perl, because it would solve all the headaches and kludges inxi uses to pass data around. Plus Perl is insanely fast compared to Python, etc. And there's something that was totally impossible to predict in the early inxi days, when Perl 6 was coming 'soon': a core requirement for inxi is that the latest inxi runs on the oldest systems. I wish I had known back then that Perl 6 would become a new language and that Perl 5 would become an actively maintained independent project, but that was impossible to foresee.

Note: while I will notice and respond to issues here, I will forget all about these forum threads once they go inactive. So a fine strategy is to comment here but file issue reports on github.com/smxi/inxi, because those stay open until I close them, i.e. I don't forget them even if an issue takes a year or more to resolve. In fact, I file issues myself in the hope that they'll motivate me to close them.

I'd post a single GitHub issue about the SAS doubling; I don't know if it's fixable, but it's certainly valid.
techtoys
Status: Curious
Joined: 28 Oct 2016
Posts: 5
That's good to hear about Perl from an outside source, and I'll pass it on to our management team here.

I've observed the same with Perl: you can produce fast, bulletproof code, and it works wherever you deploy it. Here, meanwhile, we have both engineers and operations team members asking for Python updates (to get something to work properly), which means installing multiple versions concurrently on a system.

Will open the issue for the SAS doubling.

Thanks.