Search Results

Search found 2 results on 1 page for 'rhymoid'.


  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

        [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
        [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I seem to be able to shut it up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

        # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake the device up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

        $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
        pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
        $ cat /etc/modprobe.d/blacklist
        blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    Edit: the module was renamed recently, so the following now works from userland:

        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
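    Below is a minimal sketch of one way to make that userland unbind persist across reboots; it is not part of the original question, and it assumes a Debian install that still executes /etc/rc.local at the end of boot. The PCI address and driver name are the ones quoted above.

        #!/bin/sh -e
        # /etc/rc.local -- executed at the end of each multiuser runlevel.
        # If the EHCI controller at 0000:00:1a.0 got bound during boot,
        # detach it from its driver again.
        if [ -e /sys/bus/pci/drivers/ehci-pci/0000:00:1a.0 ]; then
            echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind
        fi
        exit 0

    Note that this only undoes the binding after the fact; it does not stop the kernel from binding the device in the first place, which is what the question is ultimately asking for.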

    Read the article

  • Hungry hungry BIOS: why do I have less than 4 GiB of memory?

    - by Rhymoid
    I thought I had 4 GiB of memory, but just to be sure, let's ask the BIOS about that:

        ?: sudo dmidecode --type 20
        # dmidecode 2.12
        SMBIOS 2.6 present.

        Handle 0x000B, DMI type 20, 19 bytes
        Memory Device Mapped Address
            Starting Address: 0x00000000000
            Ending Address: 0x0007FFFFFFF
            Range Size: 2 GB
            Physical Device Handle: 0x000A
            Memory Array Mapped Address Handle: 0x000E
            Partition Row Position: Unknown
            Interleave Position: Unknown
            Interleaved Data Depth: Unknown

        Handle 0x000D, DMI type 20, 19 bytes
        Memory Device Mapped Address
            Starting Address: 0x00080000000
            Ending Address: 0x000FFFFFFFF
            Range Size: 2 GB
            Physical Device Handle: 0x000C
            Memory Array Mapped Address Handle: 0x000E
            Partition Row Position: Unknown
            Interleave Position: Unknown
            Interleaved Data Depth: Unknown

    Alright, 4 GiB it is. But I can't use all of it:

        ?: cat /proc/meminfo | head -n 1
        MemTotal: 3913452 kB

    Somehow, somewhere, I lost 274 MiB. Where did 6% of my memory go?

    Now I know the address ranges in DMI are incorrect, because the ACPI memory map reports usable ranges well beyond the ending address of the second memory module:

        ?: dmesg | grep -E "BIOS-e820: .* usable"
        [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009e7ff] usable
        [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000dee7bfff] usable
        [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000117ffffff] usable

    I get pretty much the same info from /proc/iomem (except for the 4 kiB hole 0x000-0xFFF), which also shows that the kernel only accounts for less than 8 MiB. I guess 0x00000000-0x7FFFFFFF is indeed mapped to the first memory module, and 0x80000000-0xDFFFFFFF to part of the second memory module (a bunch of ACPI NVS things live between 0xDEE7C000 and 0xDEF30FFF, and the remaining 16-something MiB of that range are just 'reserved'). I guess the highest 0x18000000 bytes of the second memory module are mapped above the 4 GiB mark. But even then, there are two problems:

    - 128 MiB (0x08000000 bytes, living somewhere between 0xE0000000 and 0xFFFFFFFF) are still completely unaccounted for. To note, my graphics card is on PCI-Express and (allegedly) has 1 GiB dedicated memory, so that shouldn't be the culprit. Did the BIOS screw up in moving the memory, leaving it partially shadowed by MMIO?
    - Even with this mediocre explanation, I only 'found' 128 MiB. But /proc/meminfo is reporting a much larger deficit; where's the other 146 MiB?

    How does Linux count MemTotal?
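    As a quick sanity check on the arithmetic (not part of the original question), the usable e820 ranges can be totalled and compared against MemTotal; this sketch assumes GNU awk (gawk) for strtonum and the dmesg output format quoted above:

        # Sum the sizes of all "usable" BIOS-e820 ranges, in MiB.
        dmesg | grep -E 'BIOS-e820: .* usable' \
          | sed -E 's/.*\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\].*/\1 \2/' \
          | gawk '{ total += strtonum("0x" $2) - strtonum("0x" $1) + 1 }
                  END { printf "e820 usable: %.1f MiB\n", total / (1024 * 1024) }'

        # Compare with what the kernel actually exports.
        grep MemTotal /proc/meminfo

    Summing the three ranges quoted above gives roughly 3950 MiB, about 146 MiB short of 4 GiB; the remaining ~128 MiB of the deficit is the difference between that usable total and MemTotal itself.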

    Read the article
