Search Results

Search found 26947 results on 1078 pages for 'util linux'.


  • Where does netstat get the process name?

    - by tjameson
    I am developing a node application and there is an option to set the process title (process name). This only sets it in some tools (like ps and top), but not in htop or netstat. I found this article that explained how most applications do it, but it doesn't change in netstat. That led me to wonder where those programs get the process name. Would they be getting it from /proc/##/cmdline? (## being the PID of the process.) I figure messing with things in /proc is a bad idea (and probably not possible), so if this is where those programs get it, is there a way to change it?
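
    For reference, Linux exposes two distinct name sources per process under /proc, and tools differ in which one they read; a minimal sketch for comparing them (the PID is hypothetical):

        pid=1234                                  # hypothetical PID
        cat /proc/$pid/comm                       # kernel task name (what prctl(PR_SET_NAME) changes)
        tr '\0' ' ' < /proc/$pid/cmdline; echo    # argv[] as execed (what overwriting argv[0] changes)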

  • How to set TV-out options under Linux on a GeForce 9600 GT video card

    - by polemon
    I'm using the TV-out connector of my GeForce 9600 GT to connect it to an old TV set. It's obviously in Composite mode; of the Component video cables, only the one labeled Pb/VIDEO gives me a signal, the other two are dead. The picture appears black/white on the TV; I presume that's because the video card outputs an NTSC signal but it's a PAL TV set. How do I change the TV-out from NTSC to PAL? My Component-to-SCART adapter hasn't arrived yet, but I think I should be able to set manually whether the signal is Composite or Component. How do I switch the TV-out between Component and Composite modes? I'm running Linux, so it's probably some settings I need to make in xorg.conf. Edit: I got this far: I need to set these options in the "Device" section of my xorg.conf:

        Option "TVStandard" "PAL-B"
        Option "TVOutFormat" "COMPOSITE"

    The whole section now looks like this:

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 9600 GT"
            Option      "AddARGBGLXVisuals" "True"
            Option      "TVStandard" "PAL-B"
            Option      "TVOutFormat" "COMPOSITE"
        EndSection

    How can I list all available settings for "TVStandard" and "TVOutFormat"?
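
    The accepted values are enumerated in the README that ships with the proprietary nvidia driver; a hedged sketch for finding them (the README path is an assumption and varies by distro and driver version):

        grep -ri "TVStandard" /usr/share/doc/nvidia* | less   # driver README lists the valid values
        grep -i "TV" /var/log/Xorg.0.log                      # what the driver actually accepted at startup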

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web-browser bookmarklet, and I get the following behaviour: in Chromium, the "Launch Application" dialog comes up and calls xdg-open org-protocol://..., which ends up firing a new Chromium frame. In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the error message "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program", without even being shown an external-application selection dialog. I'm not using any desktop environment, so I need to make this work strictly with xdg; however, despite reading the shared MIME info spec etc., I still can't fathom a working configuration.
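
    The usual xdg route is an x-scheme-handler MIME association backed by a .desktop file; a sketch, assuming emacsclient is on $PATH (the file name is arbitrary). First, ~/.local/share/applications/org-protocol.desktop:

        [Desktop Entry]
        Name=org-protocol
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

    Then register it:

        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol
        update-desktop-database ~/.local/share/applications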

  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently on 18 down and 30 up. I switched internet connections today and noticed a sharp drop in query performance. The query I am running is SELECT * FROM table. Simple: the table has one row of data. The MySQL server is on the same server as everything else; it is a VPS hosted by GoDaddy. I don't have any other information. CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's developer tools to inspect the XHR going out and coming in, and this is what it reported:

        {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}

    So apparently my server is fine. The strange thing, though, is that the XHR response comes back within less than a second of executing the query on the page, yet phpMyAdmin does not render the result immediately. I am going to try a re-install.
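
    One way to separate server time from network time is to time the request stages from the client; a sketch with curl (the URL is a placeholder):

        curl -o /dev/null -s -w 'dns %{time_namelookup}s  connect %{time_connect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' \
            'http://example.com/phpmyadmin/index.php'

    A large gap between connect and ttfb points at the server; a large gap between ttfb and total points at the transfer itself.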

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups/archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk; basically it creates Git repos which include packfiles of up to 1GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which:
    - will stay readable in future,
    - are portable, e.g. to Windows,
    - have known simple implementations, so I could re-implement them myself if necessary.
    (Using Bup packs will stretch my robustness budget, so I want to be confident about how the other parts of the system would behave.) I heard split archives are possible with both ZIP and 7-Zip. Is that right?
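
    For what it's worth, the simplest widely re-implementable scheme is a plain tar stream cut into fixed-size pieces with split(1) and rejoined with cat; a sketch (sizes and names are examples):

        tar -c photos/ | split -b 650m - backup.tar.part.
        cat backup.tar.part.* | tar -x          # restore, after copying the parts back from the discs
        7z a -v650m backup.7z photos/           # for comparison: 7-Zip's native volume splitting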

  • GNU Screen still has only the old groups for my username

    - by Dan
    I was recently added to a group on the Unix server, but my active screen session has not picked up the new groups:

        $ groups
        A B C D
        $ screen -r
        $ groups
        A B C

    Without closing my screen session, is there a way for me to use my new privileges inside it? If not, is there at least a way to save the working directory of each of my tabs? Thanks, Dan
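
    A fresh login shell started inside an existing window re-reads group membership, since supplementary groups are re-initialized at login; a sketch (you will be prompted for your own password):

        su - "$USER"
        groups        # the new group should now appear in this subshell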

  • Recovering/Creating NewWorld Partition on Mac G4 (PPC) after botched Debian Install

    - by Luis Espinal
    I was trying to install Debian 5.04 on a Mac G4 and, in typical geek tradition, I didn't RTFM. During installation I nuked all existing partitions, creating new ones to my liking. But as I learned later during the installation process, yaboot needs a NewWorld partition, so I can't boot the installation. I don't have any OS X CDs with me (this is a used G4 I purchased off craigslist) with which to create an HFS partition. I've re-run the Debian installer, which lets me create a partition that is supposed to be of type 'NewWorld', but the installer does not seem to like or recognize it. Any ideas how to proceed from here? Thanks.
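
    One avenue, sketched from memory and worth double-checking against the Debian PowerPC install docs: the "NewWorld" partition is a small Apple_Bootstrap partition, which mac-fdisk can create from the installer's shell (the device name here is an assumption):

        mac-fdisk /dev/hda
        #   b <start>  - create the small (~800K) Apple_Bootstrap partition
        #   w, q       - write the partition map and quit
        # afterwards, from the installed system or its chroot, put yaboot on it:
        ybin -v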

  • Forcing tar to create an empty archive

    - by snostorm
    I'm using tar to bundle files before transfer, so I can keep the entire file path rather than losing it along the way. However, when I try to tar an empty folder, tar tells me it is "cowardly refusing to create an empty archive". I want to keep the empty folder on the other end, but I don't want to put anything else into the archive to make it non-empty. Is there any way to do this?
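
    For what it's worth, GNU tar only refuses when it is given no members at all; naming the directory itself archives it even when empty. A quick sketch:

        mkdir -p emptydir
        tar -cf out.tar emptydir/     # works: the archive holds the directory entry
        tar -tvf out.tar              # extracting recreates the empty folder
        # the refusal appears when a file list expands to nothing, e.g.:
        # find emptydir -type f | tar -cf out.tar -T -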

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that exports a large JFS filesystem (22TB) over NFS (nfs-kernel-server). The 22TB disk sits on a NAS mounted using iSCSI. When attempting to write to the NFS share, performance is very poor: it will burst for a moment near the expected line speed and then sit idle for several seconds, with very little traffic measured (low KB/sec). I/O wait peaks on writes. When reading from the NFS mount, the system operates at the expected speed (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS), and it persists between the stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue; that share is not in regular use and thus not consuming resources.

    NFS server:

        $ cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)
        $ cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        $ cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS client:

        $ cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2
        $ cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq
        $ cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
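
    A diagnostic sketch for localizing the stall during a test write (paths are from the question; iostat is in the sysstat package):

        # on the client: a write with a forced flush, then RPC retransmission counts
        dd if=/dev/zero of=/root/incoming/testfile bs=1M count=512 conv=fdatasync
        nfsstat -rc        # growing retrans numbers point at the transport
        # on the server, while the dd runs:
        iostat -x 1        # watch await/%util on the iSCSI-backed device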

  • Change permission to /proc/net/ip_conntrack on Ubuntu server 9.10

    - by bjarkef
    Hi, I have a script that needs to extract certain information from the /proc/net/ip_conntrack file once in a while, and I do not wish to run this script as the root user. The default permissions for the file are:

        $ ls -lah /proc/net/ip_conntrack
        -r--r----- 1 root root 0 2010-03-28 12:18 /proc/net/ip_conntrack

    I can change that with:

        sudo chmod o+r /proc/net/ip_conntrack

    but it does not stick after a reboot. Is there some configuration file for file permissions in the /proc directory in Ubuntu Server 9.10? Or do I just have to stick a chmod line in some startup script?
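
    Since /proc is rebuilt at boot, I don't believe there is a persistent permission store for it; two sketches of the workaround route (the user name is a placeholder):

        # /etc/rc.local runs as root at the end of boot on Ubuntu 9.10:
        chmod o+r /proc/net/ip_conntrack
        # ...or grant the script's user one exact command via visudo:
        # myuser ALL=(root) NOPASSWD: /bin/cat /proc/net/ip_conntrack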

  • How can I turn off font-antialiasing only for gnome-terminal, but not for other applications?

    - by dan
    I'm running GNOME (gnome-session under xmonad). I want to turn off antialiasing (i.e. use monochrome rendering) for fonts in gnome-terminal, but retain antialiasing for other applications, like Firefox. Is this possible? Antialiasing is great and almost necessary for Firefox or Chrome, but it makes the fonts in gnome-terminal blurry at sizes around 12 or smaller. If it isn't possible, I'll just have to use xterm, which seems not to anti-alias its fonts under any circumstances.
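
    Fontconfig can disable antialiasing per font family rather than per application; a sketch for ~/.fonts.conf, assuming you then select that family only in gnome-terminal's profile (the family name is an example):

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
          <!-- monochrome rendering for one monospace family only -->
          <match target="font">
            <test qual="any" name="family"><string>DejaVu Sans Mono</string></test>
            <edit name="antialias" mode="assign"><bool>false</bool></edit>
          </match>
        </fontconfig>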

  • How to figure out why sites on my server aren't loading

    - by Derek
    I seem to randomly receive "page cannot load, cannot connect to server" errors for sites on one of my servers. When this happens, it seems to affect only certain IPs or IP ranges at a time: while I get the error from my home laptop, I'll be able to access the site fine from my work computer or from an offsite VPS. DNS records should already be fully propagated, as they were updated months ago. I have no idea how to diagnose what's going on. Is there a tool in cPanel, or elsewhere on the web, that can help me figure this out?
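
    A sketch of the usual first steps, run from a network where the site is failing (the hostname is a placeholder):

        dig example.com +short       # does this network resolve the IP you expect?
        mtr --report example.com     # where along the path does loss/latency start?
        curl -v http://example.com/  # does TCP connect at all? does an HTTP reply come back?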

  • Adding user to chroot environment

    - by Neo
    I've created a chroot on my Ubuntu system using schroot and debootstrap, based on minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens: I enter the schroot as root and add a new user (I tried both the adduser and useradd commands). The username shows up in /etc/passwd and I can 'su' into the new user. So far so good. But when I log out of the schroot and re-enter it, the user I created has vanished! There is no mention of that user in /etc/passwd either. How do I make the new user permanent?
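
    If I recall correctly, schroot's default profile copies the host's passwd/group files into the chroot at the start of every session, clobbering users added inside; two hedged sketches (config paths vary by schroot version):

        grep passwd /etc/schroot/default/copyfiles   # remove these entries to stop the overwrite
        # or keep a single named session alive so changes persist between entries:
        schroot -b -c mychroot -n mysession          # begin the session once
        schroot -r -c mysession                      # re-enter it later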

  • How can I avoid SSH's host verification for known hosts?

    - by shantanuo
    I get the following prompt every time I try to connect to a server using SSH. I type "yes", but is there a way to avoid this?

        The authenticity of host '111.222.333.444 (111.222.333.444)' can't be established.
        RSA key fingerprint is f3:cf:58:ae:71:0b:c8:04:6f:34:a3:b2:e4:1e:0c:8b.
        Are you sure you want to continue connecting (yes/no)?
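
    The prompt is controlled by ssh's StrictHostKeyChecking option; a sketch that disables it for one host only (the address is from the prompt above), at the cost of losing man-in-the-middle protection for that host:

        # one-off:
        ssh -o StrictHostKeyChecking=no user@111.222.333.444
        # or persistently, in ~/.ssh/config:
        # Host 111.222.333.444
        #     StrictHostKeyChecking no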

  • How to determine if a file has been backed up?

    - by Console
    I'm trying to consolidate old drives onto new ones of larger capacity. Sometimes files have been renamed but are otherwise identical; sometimes an old directory has just a few more files in it than a newer directory with the same name; sometimes a file has the same name but a different size. So I often find myself asking: are there any files on this old drive or directory that I haven't already copied to the new drive? I just want to know that I have the files; I don't want to sync anything automatically (syncing tools tend to just sync, creating duplicate folder structures and other problems, so I prefer to do it by hand). Basically, if an old drive has a file called "foo.bar" ten directories deep, and my new big drive has an identical file called "oldstuff.zip" in the root, I just want a "yes, you have it" or "no, unique files exist". Is there a free tool, a script, or a quick and easy method (Mac/Unix or Windows) to get the answer?
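
    A script-style sketch that compares by content hash, so renames and paths don't matter (GNU coreutils; on a Mac, md5 -r produces compatible output):

        find /old/drive -type f -exec md5sum {} + | sort > old.md5
        find /new/drive -type f -exec md5sum {} + | sort > new.md5
        # hashes present on the old drive but absent from the new one:
        comm -23 <(cut -d' ' -f1 old.md5 | sort -u) <(cut -d' ' -f1 new.md5 | sort -u)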

  • Make `mount` choose a device

    - by o_O Tync
    I have a mount point, /media/question, and two possible devices for it: a physical HDD and a remote NFS folder. Sometimes I plug the device in physically; in other cases I mount it via NFS. Is there a way to specify both of them in fstab so that executing mount /media/question will prefer the physical volume and fall back to NFS when it's not available?
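
    fstab itself has no fallback syntax that I know of, but a small wrapper captures the behavior; a sketch (the disk label, server, and export path are placeholders):

        #!/bin/sh
        # try the local disk first; if that fails, fall back to the NFS export
        mount /dev/disk/by-label/question /media/question 2>/dev/null \
            || mount -t nfs server:/export/question /media/question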

  • Log ports opened by an application

    - by Simon A. Eugster
    I'm searching for something like:

        tcpdump -p PID        # but tcpdump does not know about PIDs
        lsof -i --continuous  # but lsof just runs and exits; no "live logging"

    to log which connections an application opens. In my case, I want to find out which port git connects to when committing. This happens in a fraction of a second, so I cannot use lsof. If there is a lot of traffic, filtering by PID or process name would be useful.
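
    Two real options for catching connections that live less than a second (the git command stands in for any program):

        strace -f -e trace=connect git push 2>&1 | grep connect   # logs every connect(2) with its address
        lsof -r 1 -i -a -p "$(pgrep -f git)"                      # re-run lsof each second; may still miss very short-lived sockets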

  • Why is grep -i so slow? How can I do it faster for ASCII?

    - by Vi.
    $ time lzop -d < tvtropes-index.lzo | egrep -B 5 '[Dd][eE][sS][cC][eE][nN][dD] ?[Ff][rR][oO][mM]'
        real    0m0.438s
        $ time lzop -d < tvtropes-index.lzo | egrep -B 5 'descend ?from' -i
        real    0m11.294s

    Both searches are case-insensitive, so why is -i so much slower? How can I make grep -i fast without spelling things out [iI][nN] [tT][hH][iI][sS] [wW][aA][Yy]? For example, perl -ne 'print if /descend ?from/i' is fast, but -B 5 is not as trivial to implement as in grep (nor are its other options).
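
    The usual culprit is multibyte (UTF-8) locale handling in grep's case folding; forcing the C locale restores byte-wise matching for pure-ASCII data. A sketch against the same file:

        time lzop -d < tvtropes-index.lzo | LC_ALL=C egrep -B 5 -i 'descend ?from'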

  • pam_tally2 causing unwanted lockouts with SCOM or Nervecenter

    - by Chris
    We use pam_tally2 in our system-auth config file, which works fine for users, but with services such as SCOM or NerveCenter it causes lockouts. The behavior is the same on RHEL5 and RHEL6.

    This is /etc/pam.d/nervecenter:

        #%PAM-1.0
        # Sample NerveCenter/RHEL6 PAM configuration
        # This PAM registration file avoids use of the deprecated pam_stack.so module.
        auth       include      system-auth
        account    required     pam_nologin.so
        account    include      system-auth

    and this is /etc/pam.d/system-auth:

        auth       sufficient   pam_centrifydc.so
        auth       requisite    pam_centrifydc.so deny
        account    sufficient   pam_centrifydc.so
        account    requisite    pam_centrifydc.so deny
        session    required     pam_centrifydc.so homedir
        password   sufficient   pam_centrifydc.so try_first_pass
        password   requisite    pam_centrifydc.so deny
        auth       required     pam_tally2.so deny=6 onerr=fail
        auth       required     pam_env.so
        auth       sufficient   pam_unix.so nullok try_first_pass
        auth       requisite    pam_succeed_if.so uid >= 500 quiet
        auth       required     pam_deny.so
        account    required     pam_unix.so
        account    sufficient   pam_succeed_if.so uid < 500 quiet
        account    required     pam_permit.so
        password   requisite    pam_cracklib.so try_first_pass retry=3 minclass=3 minlen=8 lcredit=1 ucredit=1 dcredit=1 ocredit=1 difok=1
        password   sufficient   pam_unix.so sha512 shadow try_first_pass use_authtok remember=8
        password   required     pam_deny.so
        session    optional     pam_keyinit.so revoke
        session    required     pam_limits.so
        session    [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session    required     pam_unix.so

    The login does work, but it also increments the pam_tally counter until it hits 6 "false" logins. Are there any PAM ninjas around who could spot the issue? Thanks.
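
    One thing worth checking, offered as a hypothesis rather than a confirmed fix: pam_tally2 resets its counter in the account phase, and this system-auth has the auth line without a matching account line, so successful service logins may never reset the tally. A sketch (the account name is a placeholder):

        # candidate addition to /etc/pam.d/system-auth:
        # account  required  pam_tally2.so
        # inspect and clear a locked-out service account by hand:
        pam_tally2 --user=scomsvc
        pam_tally2 --user=scomsvc --reset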

  • Error "headers: ap_headers_output_filter()" after putting cache headers in htaccess file

    - by Brad
    I'm receiving the error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter() after

    after I included this within the .htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
            Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    What can I do to fix this? Any help is appreciated.
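
    For what it's worth, the [debug] tag suggests this line is informational tracing from mod_headers rather than a failure; if the headers are actually being set, raising the log level should silence it. A sketch (the URL is a placeholder):

        # in httpd.conf / apache2.conf:
        LogLevel warn
        # verify the Cache-Control headers are being applied:
        curl -I http://example.com/some.css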
