
kode54
Status: Interested
Joined: 19 Nov 2017
Posts: 13
How about ZFS on Linux? I see it's currently only supported by the script on BSD platforms. It really confuses my disk usage stats, as it sees my 20TB of "unmounted" filesystems as 99.9% free space.
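
For what it's worth, the pool itself reports its real usage; it's only the partition-based math that misses it. For example (my pool is named stor):

:: Code ::
# ZFS reports pool usage directly, independent of mount-based stats
zpool list -H -o name,size,alloc,free stor
zfs list -o name,used,avail,mountpoint stor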

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
I could have sworn that I had linux zfs support, but it's old code, from before zfs on linux was stable or a real option, so it is in fact bsd specific.

I'll take a look at that logic and see if I can work out a way to switch, but technically I was only handling one software raid type per platform, linux or bsd, which made the various tests easier.

Making linux handle one or more raid types will require some careful adjustments, but it should be possible, though the error messages won't have much room to cover both cases for linux.

Basically I will have to check mdraid first, remove the bsd-only zfs tests, then set more flags to let the system know which, if any, were found, and it will cascade, meaning a system with both mdraid and zfs would show mdraid only; otherwise it's too complicated.

The raid info wasn't one of the more successful things I did in inxi, sad to say, but I think I can resolve the failure to show zfs on linux.

However, with that said, I don't believe the raid data will fix your total disk used percents, unless I actually got that part working in the past.

There's very little logical connection between partition, disk size, and raid use in inxi; that's primarily the outcome of how truly terrible the options for passing complex data around are when you're restricted to bash and gawk.

Thanks for the reminder; my first answer actually said there was zfs support for linux, but luckily for me I checked the code before posting that one, lol.

So yeah.

I may be able to get this working within the week, depending on work etc; it takes care due to the complexity of raid handling. That complexity is also why I never tried to extend it beyond mdraid and zfs.

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
inxi: 2.3.49

I think I've resolved this issue by making zfs simply the first software raid option, and mdraid the second.

If no raid is found with zfs, it checks for mdraid.

Under no scenario will inxi be able to show both, not without a lot of work and testing and my having direct access to systems that would have that setup, which I don't.
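
In rough terms, the new detection order looks like this; a simplified sketch, not the actual inxi internals:

:: Code ::
# simplified sketch of the cascade, not the actual inxi code
B_ZFS_RAID='false'
B_MD_RAID='false'

# zfs is tested first; a real test must also handle the
# 'no pools available' output some zpool versions print
if type -p zpool &>/dev/null && [[ -n "$( zpool list -H -o name 2>/dev/null )" ]];then
	B_ZFS_RAID='true'
# only if no zfs pool was found does mdraid get tested
elif grep -qs '^md[0-9]' /proc/mdstat;then
	B_MD_RAID='true'
fi
# the cascade means a system with both will only ever show zfs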

Note that the only thing that uses the raid partition data is the unmounted partition section; the hard disk size percent used does not account for raid arrays, as that's simply too complicated to do.

I can't remember how partitions handle raid; it's been a while since I had my hands on a software raid system.

kode54
Status: Interested
Joined: 19 Nov 2017
Posts: 13
That reports my RAID, but reports all of my devices as "FAILED" instead of "online".

I have uploaded inxi-umaro-20171128-174804-all-root.tar.gz, probably way too verbose for your needs.

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
No, it's exactly the right verbosity, since I don't know where the failure would come from.

Note that doing core logic changes without being able to verify they work is always risky, and usually requires a few attempts before they really work.

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
The output of linux zpool is significantly different from bsd zpool output, so this won't be hard to fix.

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
As noted, there are a lot of key and core differences between the bsd zfs and this linux one.

The FAILED is being tripped because the long strings that identify the components all literally happened to contain an uppercase 'F' character; well, only one had to contain it, but most of them do.

Likewise, I never saw a bsd zfs with such long component names, so inxi never added wrapping handlers for those component names.

The F test is tricky, because if I fix it for linux, it will break for bsds. Fortunately, I just remembered I do have access to a freebsd system with zfs, so I should be able to test changes there for bsds.

Note that typical bsd component names are: ad1, ad2, etc.

So inxi did not require any real wrapping logic there.
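
To give an idea of the direction for the F test fix, here's an untested sketch: test the STATE column as its own field, so an 'F' buried in a serial number like 278AK9P4F56D can never trip it.

:: Code ::
# untested sketch: match the STATE field exactly instead of
# looking for a bare 'F' anywhere in the line
zpool status stor | gawk '
# skip the "state: ONLINE" summary line, only want config rows
$1 != "state:" && $2 ~ /^(ONLINE|DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED)$/ {
	print $1 " " $2
}'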

I didn't have logging in the second part of the zfs raid data tool; I'll add that so I can get the debugging data needed.

For now, please show this:

:: Code ::
zpool status stor


That was missing in the debugger.

kode54
Status: Interested
Joined: 19 Nov 2017
Posts: 13
:: Code ::
  pool: stor
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: resilvered 3.33G in 0h1m with 0 errors on Mon Nov 20 19:47:02 2017
config:

        NAME                                          STATE     READ WRITE CKSUM
        stor                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-TOSHIBA_HDWE160_278AK9P4F56D          ONLINE       0     0     0
            ata-TOSHIBA_HDWE160_96TIK3F8F56D          ONLINE       0     0     0
          mirror-1                                    ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6CJP3R4  ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4JP359R  ONLINE       0     0     0

errors: No known data errors


Also note, this is only if the devices are attached by device identifier. They may also be attached simply as "sdb", "sdc", "sdd", "sde", as they were when I first imported the pool.

In the Toshiba drive strings, the Fs were in the serial number, while with the WD drives, they're in the model number.

Also, your forum settings are broken; they won't let me edit my post within the multi-post time limit.

techAdmin
Status: Site Admin
Joined: 26 Sep 2003
Posts: 4126
Location: East Coast, West Coast? I know it's one of them.
Note that there is already one situation in your case that inxi is simply not designed to handle: you have two mirrors in the same pool, which I've never actually seen myself in any data. I didn't even know that was possible; I was always assuming one device has one raid thing in it, not one or more.

I have fixed some of the false id for the failed case, which it was fortunate your case triggered, but the extremely verbose component names are weird; not sure where linux comes up with those, so that is not going to be easy to fix immediately.

I'll look at the rest of the data, then do another fix commit in a day or two.

inxi 2.3.50 will fix some, but not all, issues with zfs on linux.

Your case is particularly complex due to the extremely verbose component names, and the fact you have two mirrors.

The edit time limit is required due to ongoing issues with spammers; sorry, that's not broken, it's working. Unfortunately, the forums consider a post and an edit as the same thing; that's just how it is.

Technically I'm glad you have this odd setup with complicated names, because that's a scenario that the raid part of inxi was never built to handle, so it's good to know what is possible re output etc.

kode54
Status: Interested
Joined: 19 Nov 2017
Posts: 13
Oh, yeah, I noticed I am able to run zpool status and such as non-root now, maybe because I added myself to the "disk" group, or maybe because of other permissions on my Arch setup?
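
If I had to guess, it comes down to who can read /dev/zfs on linux; checking its ownership against my groups would probably confirm it (just a guess on my part):

:: Code ::
# guess: non-root zpool access is governed by /dev/zfs permissions
ls -l /dev/zfs
id -Gn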

The two mirrors are striped. It is an unfortunate setup, the result of starting with two smaller drives, then incorporating two larger ones.

Had I planned ahead, I would have stuck with multiple drives of the same capacity, and gone with either raidz1 or raidz2.
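
For comparison, the two layouts boil down to something like this, with hypothetical short device names:

:: Code ::
# what I have now: two mirror vdevs, striped together by the pool
# (hypothetical device names)
zpool create stor mirror sdb sdc mirror sdd sde

# what I should have done with four equal drives: one raidz1 vdev
zpool create stor raidz1 sdb sdc sdd sde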

And I completely understand about handling spammers. I have an ElkArte powered forum that receives the occasional bit of spam, even though it is behind an OpenResty nginx server with a Lua (LuaJIT powered) script that passes each request against a DNSBL, StopForumSpam, which seems to work for the most part. All requests to the DNS are cached by a local installation of named, which forwards them to either the ISP's DNS or Google DNS. Anyone fitting certain DNSBL criteria is immediately thrown a 4xx error for all requests to the server.
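
The core of that check is just a reversed-octet DNS lookup; a rough shell equivalent of the Lua logic would be something like this, though the exact DNSBL zone name here is an assumption from memory:

:: Code ::
# rough shell equivalent of the lua dnsbl check; the zone name is
# an assumption from memory, not necessarily the exact one I use
ip='203.0.113.7'
rev=$( awk -F. '{ print $4"."$3"."$2"."$1 }' <<< "$ip" )
if host "$rev.dnsbl.stopforumspam.com" &>/dev/null;then
	echo "listed: reject with a 4xx"
fi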