
Re: report a Debian bug



On 11/2/23 01:05, gene heskett wrote:
On 11/2/23 01:46, David Christensen wrote:
On 11/1/23 18:34, gene heskett wrote:

So what I'm going to do next is transfer my /home, the whole MaryAnn, from a 4-drive RAID10 to a single 2 TB SSD, and then switch /home from the RAID to that single drive, thereby removing the RAID10 from the culprit list.


The df(1) output, below, indicates that /home is still on RAID10.
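
If and when you do migrate, a minimal sketch of the copy, assuming the new 2 TB SSD already has a file system mounted at /mnt/newhome (the device and mount point are assumptions):

# rsync -aHAX --info=progress2 /home/ /mnt/newhome/

Then point the /home entry in /etc/fstab at the new file system and reboot.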


If that doesn't fix it, then it's something to do with the graphics.
I have one real clue: someone on the openscad list said the preview mode opens a bunch of graphics buffers that it doesn't use in final render mode. So I ran some tests on a fairly complex project. The openscad logging window said a preview was done in a bit over a second, while the final monocolored version took over 9 seconds to render, giving me an image I could move in 3D space in the preview window in real time. But that 1.2-second preview render took 30 to 45 seconds to show on screen, every update of that image took at least 45 seconds to register, and during those 45 or more seconds openscad was frozen to all other input. That is as close to a clue as I have. The same time-killing lockup also manifests whenever any kind of file requester needs to be drawn on screen, except there it might be as long as 5 minutes before the requester appears. That raid was built by buster, worked fine for bullseye, but seems to be a disaster for bookworm, yet an fsck finds nothing wrong.

Those are the clues I have. Nothing obvious and common to all of this shows up in the logs.

I own the whole nearly 1.8T raid10, every byte of it.  Why can't I use it? Instead it's holding me for an undefined ransom in time, which at 89 I don't have a lot of left.
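
Regarding the preview lockups: one quick, hedged check is whether openscad is getting hardware OpenGL or falling back to the llvmpipe software rasterizer. Assuming the mesa-utils package is installed:

$ glxinfo -B | grep -E 'renderer|OpenGL version'

If the renderer string says llvmpipe or softpipe, everything is being drawn in software, which could explain multi-second screen updates and the frozen UI.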

I assume this is the computer with the Asus PRIME Z370-A II motherboard,   Intel i5-9600K processor, 32 GB RAM, WD Black 2 TB NVMe PCIe 3.0 x4 SSD, a few PCIe x1 SATA III HBA's with many ports, many SATA III SSD's and/or HDD's, and a few md RAID (?).

Yup, I just added another 3.6 TB of SSD's that are still unformatted. Gigastones from Taiwan.


4 @ 1 TB or 2 @ 2 TB? If and when you configure them, please note that in future posts.


Did you connect the /home RAID10 4 @ 1 TB SSD's to one PCIe x1 HBA?

I assume it's an x1 card, small footprint, the cheap version with 10 bondouts, but only 6 have drive cable sockets installed. Only the 4 for the raid10 are actually used.
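
A hedged way to confirm what that card actually is, from Linux (the grep just matches the SATA controller class string in the lspci output):

$ lspci -nn | grep -i sata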


One PCIe 3.0 lane is rated for 985 MB/s, which is less than the sum of the throughputs of 4 @ SATA III SSD's. But it should not cause the openscad issues.
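
For the arithmetic: 4 @ SATA III SSD's at roughly 550 MB/s each is about 2,200 MB/s aggregate, versus 985 MB/s for the single lane, so simultaneous access to all four is capped at less than half the drives' combined rating. A rough sequential-read check, assuming the RAID members are sdb through sde (the device names are assumptions):

# for d in sdb sdc sdd sde ; do hdparm -t /dev/$d ; done

Run the same commands in parallel (append '&' inside the loop and finish with 'wait') to see the shared-lane ceiling.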


I have another card I haven't tried; it's a full 16 ports but is also the narrow PCIe footprint. Neither is sold as a RAID card, so it's all mdadm.


I recall discussing that about two months ago. Similar to the 6 port HBA, it will limit the aggregate throughput, but it should not cause the openscad issues.


This worked perfectly for buster and bullseye, so it seems this is bookworm's problem.


If you have Buster or Bullseye drives or system images, they could be useful for testing.
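
If you need to capture an image first, a minimal sketch, assuming the old system drive is /dev/sdX (a placeholder) and none of its file systems are mounted:

# dd if=/dev/sdX of=/path/to/bullseye.img bs=1M status=progress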


Are your file systems full?
/home is currently about 20% used; everything else is much lower still.

# df /boot / /tmp /home
The whole thing:
gene@coyote:~$ df
Filesystem      1K-blocks      Used  Available Use% Mounted on
udev             16329748         0   16329748   0% /dev
tmpfs             3272676      1880    3270796   1% /run
/dev/sda1       863983352  15989096  804032608   2% /
tmpfs            16363376      1232   16362144   1% /dev/shm
tmpfs                5120         8       5112   1% /run/lock
/dev/sda3        47749868       260   45291600   1% /tmp
/dev/md0p1     1796382580 327360472 1377697132  20% /home
tmpfs             3272672      3768    3268904   1% /run/user/1000


Okay -- no free space issues.


What drive corresponds to /dev/sda? If not the WD Black 2 TB NVMe drive, then is the WD Black installed and configured?
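
A quick way to map device names to physical drives (column names may vary slightly with the util-linux version):

$ lsblk -o NAME,MODEL,SIZE,TRAN,MOUNTPOINT

The TRAN column distinguishes nvme from sata, which should show whether the WD Black is present at all and which drive /dev/sda actually is.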


Do your file systems need defragmenting?

# e4defrag -c /boot / /home
Didn't know; I wasn't familiar with how to do it. And the worst-case score from that command was 1.
/home was the last check:

<Fragmented files>                             now/best       size/ext
1. /home/gene/.mozilla/firefox/f8j7d2lj.default-esr/storage/default/https+++www.wsaz.com/ls/data.sqlite
                                                  7/1              4 KB
2. /home/gene/.mozilla/firefox/f8j7d2lj.default-esr/storage/default/https+++apps.sascdn.com/ls/data.sqlite
                                                  7/1              4 KB
3. /home/gene/.mozilla/firefox/f8j7d2lj.default-esr/storage/default/https+++www.euronews.com/ls/data.sqlite
                                                  6/1              4 KB
4. /home/gene/.mozilla/firefox/f8j7d2lj.default-esr/storage/default/https+++www.britannica.com/ls/data.sqlite
                                                  6/1              4 KB
5. /home/gene/.mozilla/firefox/f8j7d2lj.default-esr/storage/default/https+++travelermaster.com/ls/data.sqlite
                                                  6/1              4 KB

  Total/best extents                             1245442/1166289
  Average size per extent                        262 KB
  Fragmentation score                            1
  [0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
  This directory (/home) does not need defragmentation.
  Done.


Okay -- no fragmentation issues.


Thank you David.


YW.  :-)


The "PRIME Z370-A II Series" user's manual indicates various RAID and non-RAID storage configurations using PCIe slots, PCIe Hyper M.2 cards, M.2 sockets, M.2 drives, SATA 6.0 Gb/s ports, SATA drives, and Setup NVRAM settings. The details are non-trivial -- perhaps your system is mis-configured (?):

https://www.asus.com/us/supportonly/prime%20z370-a%20ii/helpdesk_manual/


Please examine the following NVRAM settings:

BIOS Setup
  Advanced menu
    PCH Storage Configuration
      SATA Mode Selection			AHCI


If the WD Black is installed in socket M.2_1:

BIOS Setup
  Advanced menu
    Onboard Devices Configuration
      M.2_1 Configuration			PCIE mode


If the WD Black is installed in socket M.2_2 (ports SATA6G_5 and SATA6G_6 will be disabled in X4 mode):

BIOS Setup
  Advanced menu
    Onboard Devices Configuration
      M.2_2 PCIe Bandwidth Configuration	X4
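

Once Bookworm is up, a hedged way to confirm the negotiated link from the OS side (the '-d ::0108' selector matches the NVMe controller class, and assumes the drive enumerates as an NVMe device):

# lspci -vv -d ::0108 | grep -E 'LnkCap|LnkSta'

LnkSta should report Speed 8GT/s, Width x4 if the M.2 socket is configured for PCIe 3.0 x4.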


To check the hardware and NVRAM configuration, tear down the system to just the motherboard and one SATA drive with Bookworm, reset Setup NVRAM to defaults, and test. If the openscad issue is present, try Bullseye and/or Buster. If the issue is absent, continue adding hardware, configuring NVRAM, and testing until either you find the problem or the system is complete.


David

