Search Results

Search found 13128 results on 526 pages for 'square root'.


  • Page File - Why set one for each drive?

    - by Magic
    Hello, I have Windows Vista Business Edition running on my laptop (the brand is HCL). I have 4 hard drives, as follows:
        C - 29.2 GB (of which only 3.68 GB is free)
        D - 39 GB (of which 37.8 GB is free)
        E - 39 GB (of which 37.3 GB is free)
        F - 41.6 GB (of which 41.4 GB is free)
    However, my page file setting is currently "Automatically manage paging file for all drives". Question: why should I set one for each drive? Should I set my page file on the OS root drive? I happened to talk to a system administrator at an IT company and he advised that we should never put the page file on the OS drive but on an alternate drive wherever possible. It would be really helpful if you could guide me here, or at least point me to the right resources so that I can read about paging and its best practices. Cheers,

    Read the article

  • How to get uncaught PHP errors from fcgi server

    - by jason
    My web hosting company recently replaced suPHP with FastCGI (fcgi) on my dedicated server because I needed opcode caching. Since then I see loads of 500 errors in the Apache error log, and the PHP error log is empty, so I have no way to figure out the root cause. One cause I did find was timeouts, so my hosting company raised FcgidConnectTimeout and FcgidIOTimeout to 200; I believe there are no more timeout errors in my PHP scripts. My question is: how do I capture PHP errors before the 500 Internal Server Error page is displayed to the user? I am using a CentOS 5.8 server, WHM 11.34.0 (build 9), PHP 5.3.18 and Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
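
    One way to capture those fatal errors (a minimal sketch; the log path and ownership are assumptions for this particular setup) is to have PHP write them to its own log file, which survives the generic 500 page:

        # check where the loaded php.ini lives and what error logging is set to
        php -i | grep -iE 'loaded configuration|log_errors|error_log'
        # in that php.ini set:
        #   log_errors = On
        #   error_log  = /var/log/php_errors.log
        touch /var/log/php_errors.log
        chown nobody: /var/log/php_errors.log   # adjust to whichever user your FCGI PHP processes run as
        /scripts/restartsrv_httpd               # WHM/cPanel helper to restart Apache and the FCGI pool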

    Read the article

  • Can't figure out how to make Slitaz USB persistent

    - by Dennis Hodapp
    I installed SliTaz on my USB drive. However, I can't figure out how to make it persistent automatically, and different sources tell me different ways to do it. One told me to add "slitaz home=usb" to the syslinux.cfg file, like this:
        append initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb
    but it didn't work for me. http://www.slitaz.org/en/doc/handbook/liveusb.html gives an example of how to do it manually, but I haven't tried it and I also want it to happen automatically. custompc.co.uk/features/602451/make-any-pc-your-own-with-linux-on-a-usb-key.html is an older article that also explains how to make the USB persistent, but I don't want to try it because it looks outdated (from 2008). Does anyone know the best way to make the USB automatically persistent?

    Read the article

  • Apache does not serve non-locally

    - by yodaj007
    I have a freshly installed instance of Fedora Core 16 inside VirtualBox using bridged networking. On it, as root, I typed in:
        yum -y install httpd
        service httpd start
        ifconfig
    Inside the VM, I can open a web browser to 'localhost' and I get the Apache test page. It works. But in Windows (the machine hosting the VM), I point my browser to the IP address returned by ifconfig (192.168.2.122) and the connection times out. I can go to a command prompt and ping the VM. Is there a firewall or something that comes with Fedora by default? Or is there something I need to change in a config file?
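
    Fedora does enable an iptables firewall by default, and inbound port 80 stays blocked until you open it. A minimal sketch (rule position and service names are assumptions; adjust to your rule set):

        # confirm the firewall is the culprit by allowing HTTP, then persist the rule
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save
        # or temporarily stop the firewall just to test:
        service iptables stop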

    Read the article

  • CentOS - Yum doesn't update anymore?

    - by Xanathos
    I've been trying to use yum, but for some reason not even search works anymore. I even tried putting packages I had already downloaded in the search criteria, and it's the same:
        [root@AMDFX03 Downloads]# yum search glibc
        Loaded plugins: fastestmirror, refresh-packagekit, security
        Loading mirror speeds from cached hostfile
        epel/metalink                                        | 22 kB  00:00
         * base: centos.secrel.com.br
         * epel: archive.linux.duke.edu
         * extras: centos.secrel.com.br
         * rpmforge: apt.sw.be
         * updates: centos.secrel.com.br
        adobe-linux-x86_64/primary                           | 1.2 kB 00:00
        http://linuxdownload.adobe.com/linux/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from adobe-linux-x86_64: [Errno 256] No more mirrors to try.
    This error appears no matter what I do. Please, can you tell me how to fix this, or at least how to reset yum's configuration?
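
    Judging from the output, the failure comes from the Adobe repository's metadata rather than from yum's own configuration. A sketch of two things worth trying (the repo id is taken from the output above):

        # throw away all cached metadata and rebuild it
        yum clean all && yum makecache
        # if the Adobe mirror is still serving a bad checksum, skip that repo for now
        yum --disablerepo=adobe-linux-x86_64 search glibc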

    Read the article

  • /data/tmp on database server?

    - by Mellon
    I am on a Linux (Ubuntu) machine with MySQL installed. My teacher gave out an assignment which says "copy cars.dat to /data/tmp on the MySQL database server" without any explanation, and I do not know what "/data/tmp on the database server" means exactly. Basically, after that I need to execute an SQL statement like:
        LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars
    So what does "copy cars.dat to /data/tmp on the database server" mean, when there isn't even a /data/tmp directory? Personally, I checked the /etc/mysql/my.cnf file, inside which there are these definitions:
        ...
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        ...
    Does it mean to copy cars.dat to the tmpdir, which is just /tmp under the root directory?
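
    If the "database server" here is the same machine you are working on (an assumption), then /data/tmp is simply a directory you are expected to create before loading the file. A minimal sketch (the database name mydb is a placeholder):

        sudo mkdir -p /data/tmp
        sudo cp cars.dat /data/tmp/
        sudo chmod o+r /data/tmp/cars.dat    # the mysqld process must be able to read it
        mysql -u root -p mydb -e "LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars;"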

    Read the article

  • Is giving read permissions on /etc/shadow to apache user a wise decision from security point of view?

    - by Czar
    I have to use PAM authentication for DAV SVN, but when everything is configured as specified in the mod_auth_pam documentation, authentication does not work. After some research I realized that, for this to work, httpd would have to run as the root user (which I don't like and won't implement), or the apache user (under which httpd runs by default) needs permission to read the /etc/shadow file. So there is a pair of connected questions I want to ask: Is giving this permission to the apache user a wise decision from a security point of view? If the answer to the first question is "yes", what is the correct way to do it? For now I've done the following:
        groupadd shadow
        usermod -G shadow apache
        chmod g+r /etc/shadow
    Another way I can come up with is using an ACL:
        setfacl -m u:apache:r /etc/shadow
    Note: the OS is Fedora 14 x86_64 (kernel 2.6.35.11), httpd v2.2.17, mod_auth_pam v1.1.1
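
    Whichever approach you pick, it is worth verifying that the apache user can actually read the file, without running anything as root. A small check (a sketch):

        # run a read as the apache user (its login shell is usually nologin, hence -s)
        su -s /bin/bash apache -c 'head -n 1 /etc/shadow'
        # if you went the ACL route, inspect the result
        getfacl /etc/shadow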

    Read the article

  • Getting molly-guard to work with sudo

    - by 0xC0000022L
    The program molly-guard is a brilliant little tool which will prompt you for a piece of information before you reboot or shut down a system. Usually it asks for the hostname. So when you work a lot via SSH, you won't end up taking down the wrong server, just because you were in the wrong tab or window. Now, this works all fine when you say reboot on the command line while you are already root. However, it won't work if you do sudo reboot (i.e. it won't even ask). How can I get it to work with sudo as well? System: Raspbian (latest, including updates), package molly-guard version 0.4.5-1.
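
    One commonly suggested explanation, offered here as an assumption to verify on your own system: molly-guard works by placing wrapper scripts for reboot/halt/shutdown early in the PATH, while sudo substitutes its own secure_path, which skips those wrappers. Prepending the molly-guard directory to secure_path in sudoers should restore the prompt:

        sudo visudo
        # change the Defaults line to something like:
        #   Defaults secure_path="/lib/molly-guard:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"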

    Read the article

  • Can Solaris RBAC roles be ported to Linux using SElinux only?

    - by Jimmy
    We are migrating an application from Solaris to Linux, and the main user is allowed, through the use of RBAC roles, to run a few system commands like svccfg/svcadm (chkconfig on Red Hat). Is it possible, using only SELinux (no sudo), to allow a normal user to run chkconfig on/off (basically give it the ability to add and remove services)? My approach was to try to create an SELinux user with a corresponding SELinux role that manages the app's domain/type and is allowed to transition to all other domains required to run chkconfig, tcpdump or any other system utility usually restricted to root access only. All my attempts so far have failed, so my second question would be: where could I find good documentation that applies to this specific problem?

    Read the article

  • How to access a MySQL server remotely?

    - by ÉricP
    Hi, I'm trying to access my remote MySQL server from my own computer. I uncommented:
        bind-address = 80.10.65.45
    I also added 80.10.65.45 as a host in the privileges table (root @ 80.10.65.45, ALL PRIVILEGES, with grant). I'm using Sequel Pro on Mac OS X to connect via SSH; here is the debug log:
        debug1: Authentication succeeded (password).
        debug1: Local connections to LOCALHOST:58517 forwarded to remote address 127.0.0.1:3306
        debug1: Local forwarding listening on ::1 port 58517.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 58517.
        debug1: channel 1: new [port listener]
        debug1: Entering interactive session.
        debug1: Connection to port 58517 forwarding to 127.0.0.1 port 3306 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection refused
        debug1: channel 2: free: direct-tcpip: listening port 58517 for 127.0.0.1 port 3306, connect from 127.0.0.1 port 58519, nchannels 3
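
    The log shows the tunnel forwarding to 127.0.0.1:3306 on the server, so mysqld has to be listening on localhost there; if bind-address only points at the public IP, the forwarded connection is refused exactly as shown. A diagnostic sketch (the user name is a placeholder):

        ssh user@80.10.65.45 'netstat -ltnp | grep 3306'   # where is mysqld actually listening?
        # when tunnelling over SSH, a common setup is to keep MySQL bound to localhost:
        #   bind-address = 127.0.0.1
        # in my.cnf, then restart MySQL and point the tunnel at 127.0.0.1 as before.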

    Read the article

  • Creepy MySQL error during hard work

    - by Kiewic
    Hi, I have a MySQL database installed on an openSUSE 11.1 server (it is a Bitnami image). The database works fine and can run for many days without any error, but when MySQL receives a huge number of transactions, it dies immediately. The next screen shows the error. Moreover, I don't know how to restart MySQL. I have tried this:
        /opt/bitnami/mysql/bin/mysqld start
    But it doesn't work; it gives me the following output:
        110209 17:09:01 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
        110209 17:09:01 [ERROR] Aborting
        110209 17:09:01 [Note] /opt/bitnami/mysql/bin/mysqld.bin: Shutdown complete
    It doesn't matter which kind of statements are executing; if there is a huge number of them, MySQL dies. The MySQL server version is 5.1.30. What can be causing these sudden failures?
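
    The output above is only mysqld refusing to run as root, not the original crash. A sketch of how one might restart it on a Bitnami stack (the ctlscript path is an assumption for this image) and where to look for the real cause:

        /opt/bitnami/ctlscript.sh restart mysql
        # or start the daemon explicitly as the mysql user:
        /opt/bitnami/mysql/bin/mysqld_safe --user=mysql &
        # the reason for the crash under load usually ends up in the MySQL error log:
        tail -n 100 /opt/bitnami/mysql/data/*.err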

    Read the article

  • mod_rewrite directory path to deeper directory

    - by DA.
    I don't usually work with LAMP and am a bit stumped getting a site working locally. The site is set up to be used via localhost: (1) http://localhost/mysite. However, the way the site files physically sit on the server, the root is located at (2) /var/www/mysite/trunk/site. I'm trying to figure out a way where I could type #1 but have Apache actually look for the files in #2, so that all of the asset/page links in the web application work. Is mod_rewrite the solution? If so, I'm stumped on the syntax. I have this, but it won't work (due, I assume, to it causing an infinite loop):
        RewriteRule ^mysite/ mysite/trunk/site
    I have a hunch I need to sprinkle on some regex?
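
    An Alias is often a simpler fit than mod_rewrite for this. A sketch, assuming you can edit the main Apache configuration (Apache 2.2 syntax; where the file lives depends on your layout):

        Alias /mysite /var/www/mysite/trunk/site
        <Directory /var/www/mysite/trunk/site>
            Order allow,deny
            Allow from all
        </Directory>

    Check the syntax and restart afterwards, e.g. apache2ctl -t && service apache2 restart (or apachectl, depending on the distro).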

    Read the article

  • Cloning a linux system from sdx to cciss

    - by churnd
    I have an HP ML 310 server running CentOS Linux 5.5. I'm buying a RAID card (LSI 9260-8i) to set up a mirrored OS drive. Right now, the boot drive is set up with GRUB installed on the MBR of /dev/sda and has a 100MB /boot partition on /dev/sda1; the rest is configured in LVM, with a 20GB VG for the root partition and an ~80GB VG for home. The new disks will also be slightly larger. What is the best way to clone the boot drive to the new CCISS device?

    Read the article

  • Is it possible to write an htpasswd file that redirects certain users to certain directories?

    - by racl101
    Hi there, assuming that I have a domain, say, mydomain.com, and three directories under it (call them dir1, dir2 and dir3), is it possible to put an htpasswd file at the web root and have it redirect authenticated users to their respective directories (e.g. user joecorleone101 always gets redirected to dir3 upon being authenticated)? First of all I want to know if this is possible, and second whether it is safe or practical (or impractical). Am I thinking about it the wrong way? Should I maybe use an htaccess file?

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP host to the new storage, and not via a third station (my local machine)? I've tried it with ftp but it didn't work; I think I used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard
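
    One approach (a sketch; host names, credentials and paths are placeholders) is to run an FTP client on the new server itself, so the data flows straight from the old webspace to the new machine without touching your workstation:

        ssh root@newserver
        lftp -u olduser,oldpass ftp://old-webspace.example.com -e 'mirror --verbose / /srv/migrated; quit'
        # wget can do the same if lftp is not installed:
        wget -m --user=olduser --password=oldpass ftp://old-webspace.example.com/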

    Read the article

  • How to revert to "last known good configuration"

    - by Ripley
    Hi guys, I failed to install Ubuntu 10.04 with Wubi; for some reason it kept telling me the root partition is not defined. I got tired of fighting with it, so I just removed Ubuntu from within Windows. However, this installation crippled my original Windows XP: a normal boot now ends in a blue screen with error code 7E. I'm still able to boot with the 'last known good configuration', though. My understanding is that booting like this should restore things and I'm supposed to be fine on the next reboot, but that is not the case for me: I have to choose 'boot from last known good configuration' each and every time to work around the blue screen. Could you suggest how I could resolve this? It feels foolish having to waste 10 more seconds every time I start the OS.

    Read the article

  • sftp chroot access via SSH

    - by Cudos
    Hello. I have this setup in sshd_config:
        AllowUsers test1 test2
        Match group sftpgroup
            ChrootDirectory /var/www
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp
        Match user test2
            ChrootDirectory /var/www/somedomain.dk
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp
    I am trying to restrict test2 to only /var/www/somedomain.dk. For some reason, when I try to log in as test2, e.g. with FileZilla, I get this error: "Server unexpectedly closed network connection". The users are created and work, and the SSH service has been stopped and started. test1 works fine with FileZilla, and the root of that connection is /var/www. What am I doing wrong?
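
    Two things are worth checking (a diagnostic sketch, not a definitive diagnosis): sshd closes the connection immediately if the chroot target is not owned by root or is writable by group/other, and with several Match blocks sshd uses the first value it obtains for each option, so block order matters for a user who is also in sftpgroup. The real reason is logged on the server, not shown to the client:

        tail -f /var/log/auth.log      # or /var/log/secure, depending on the distro
        ls -ld /var/www /var/www/somedomain.dk
        chown root:root /var/www/somedomain.dk
        chmod 755 /var/www/somedomain.dk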

    Read the article

  • Make IP Address point to webroot instead of virtual hosts' documentroot

    - by Reuben L.
    I used to have a one-to-one mapping between domain name and IP. Recently I paid for a second domain name and decided to host it on the same box and IP, so I added virtual hosts to point each domain name to a different document root (i.e. /var/www/webbie1 and /var/www/webbie2). The question I have is: can I still make the IP, e.g. http://XXX.XXX.XXX.XXX, point to the webroot, i.e. /var/www/? If so, how do I go about doing it? For a fuller picture, the box runs Ubuntu Server and I'm using Apache 2 as the web server; the changes I made to enable the virtual hosts were in the apache2.conf file, with <VirtualHost [IP address]> ... </VirtualHost> tags. Thanks.
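
    With name-based virtual hosts, Apache answers any request that matches no ServerName, including requests made by bare IP, with the first vhost it finds. So one sketch of a solution, assuming you can add a site that sorts before the others, is a default vhost pointing at /var/www:

        <VirtualHost *:80>
            DocumentRoot /var/www
        </VirtualHost>

    placed in a file such as /etc/apache2/sites-available/000-default (the name is an assumption), enabled with a2ensite and followed by a service apache2 reload.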

    Read the article

  • VMware postfix server drops connection

    - by nicoX
    Our physical server godzilla forwards mail to our virtual VMware server b4; they are on the same network. Often the connection drops and we can't ping godzilla from b4. That means mail from godzilla won't reach b4 and it sits in the mail queue. Sometimes, after a few hours, the issue fixes itself: b4 "wakes up" and the mail gets delivered. Also, if we SSH into b4 remotely, b4 wakes up and receives any queued mail from godzilla.
        netadmin@b4:/var/log$ arp -a
        ? (192.168.209.80) at 00:1E:C9:AE:79:9D [ether] on eth0
        root@godzilla:/usr/local/bin# arp -a
        ? (192.168.209.20) at 00:50:56:91:7d:b2 [ether] on eth0
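
    Not a root-cause fix, but a hedged workaround sketch while investigating (addresses taken from the arp output above): keep the path between the two hosts warm so the ARP entry never goes stale, either by pinning it statically or by pinging from cron:

        arp -s 192.168.209.80 00:1E:C9:AE:79:9D     # on b4, pin godzilla's entry
        echo '*/5 * * * * root ping -c1 192.168.209.80 >/dev/null 2>&1' >> /etc/crontab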

    Read the article

  • HP ProCurve Switch: port filtered

    - by user117140
    My HP ProCurve switch is blocking port 22 and I don't know how to unblock it. From the server you can see port 22 is filtered:
        [root@server ~]# nmap -p22,80,443 10.247.172.70
        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2012-04-16 14:12 IST
        mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns_servers
        Interesting ports on 10.247.172.70:
        PORT    STATE    SERVICE
        22/tcp  filtered ssh      ------------------> see
        80/tcp  filtered http
        443/tcp filtered https
    This is blocked on a Cisco switch, but I don't have any clue how this is done here. I know that a VLAN is configured on the switch:
        vlan 54
           ip ospf 10.247.172.65 area 0.0.0.10
           vrrp vrid 54
              owner
              virtual-ip-address 10.247.172.65 255.255.255.192
              priority 255
              enable
              exit
           exit
    Please let me know how to unblock SSH (port 22) access on this switch.

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configurations are as follows:
        <Directory "/var/www/folder">
            <Files "index.php">
                order deny,allow
                Allow from all
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>
    I visit my server at 127.0.0.1/folder but I can GET and POST the file just like normal. I've also tried reversing the order, order allow,deny, require, limitexcept and limit. How can I only allow POST requests to be processed by one file in a folder?
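
    One likely explanation, offered as a sketch rather than a verified diagnosis: with "Order deny,allow", an "Allow from all" wins over every Deny, so the Deny inside <LimitExcept> never takes effect for GET requests. Scoping the access control to the method-limited section, with the order flipped, behaves as intended under Apache 2.2:

        <Directory "/var/www/folder">
            <Files "index.php">
                <LimitExcept POST>
                    Order allow,deny
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>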

    Read the article

  • htaccess - Redirects with more than 1 level deep not working

    - by barfoon
    Hey everyone, just moved to shared hosting on GoDaddy and I'm trying to get my .htaccess rules working. Here's what I have:
        ErrorDocument 404 /error.php
        Options FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.org$
        RewriteRule ^(.*)$ http://mydomain.org/$1 [R=301,L]
        RewriteRule ^view/(\w+)$ viewitem.php?itemid=$1 [R=301,L]
        RewriteRule ^category/(\w+)$ viewcategory.php?tag=$1 [R=301,L]
        RewriteRule ^faq$ faq.php
        RewriteRule ^about$ about.php
        RewriteRule ^contact$ contact.php
        RewriteRule ^submit$ submit.php
        RewriteRule ^contactmsg$ handler-contact.php
    All the pages at the root of the domain seem to be working, i.e. mydomain.org/faq and mydomain.org/about work, but whenever I try mydomain.org/category/somecategory, I get a 404. How can I fix my .htaccess to obey these rules that are more than one level deep? Thanks,

    Read the article

  • Why's SMC failing on startup?

    - by Brian Knoblauch
    Trying to remove a user from one of our servers, but I seem to be thwarted at every turn... SMC refuses to load the user list (failing with a NoClassDefFoundError in the listAll method of UserContent). vipw just returns with "vipw: /etc/passwd file busy". I'm the only user on the system at the moment (it's our backup SRSS box), and both of these fail even right after a reboot. I don't have console access at the moment either unfortunately (or I would try single user mode). Of course, even if init mode S worked and let me do this one task, it doesn't solve the root problem. Ideas?

    Read the article

  • How to make Ubuntu useradd behave like Centos useradd?

    - by Buttle Butkus
    I don't remember modifying CentOS's useradd to get this behavior: useradd on CentOS creates the user's home directory with all the normal skeleton files (like .bashrc). I modified /etc/default/useradd on Ubuntu to make it look like the CentOS one (that just required some uncommenting), except that Ubuntu has SHELL=/bin/sh instead of SHELL=/bin/bash. How do I make useradd act like it does on CentOS? Is there some existing option to change, or should I just add an alias to /etc/bash.bashrc? The difference: on Ubuntu, useradd is not creating the home directory. As root:
        $ useradd test
        $ cd ~test
        -su: cd: /home/test: No such file or directory
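
    A sketch of two ways to get the CentOS behaviour: CentOS ships CREATE_HOME yes in /etc/login.defs, which Debian and Ubuntu do not, so either pass -m explicitly or set that default (after checking the variable is not already defined):

        useradd -m -s /bin/bash test      # -m creates the home dir and copies /etc/skel
        # or make it the default for every invocation:
        echo 'CREATE_HOME yes' >> /etc/login.defs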

    Read the article

  • How to remotely install Linux via SSH?

    - by netvope
    I need to remotely install Ubuntu Server 10.04 (x86) on a server currently running RHEL 3.4 (x86). I'll have to be very careful, because no one can press the reset button for me if anything goes wrong. Have you ever remotely installed Linux? Which way would you recommend? Any advice on things to watch out for? Update: Thanks for your help. I managed to "change the tires while driving"! The main components of my method are drawn from "HOWTO - Install Debian Onto a Remote Linux System", "grub legacy: Booting once-only", "grub single boot and kernel panic reboot", and "Ubuntu Community Documentation: InstallationFromKnoppix". Here is the outline of what I did:
        1. Run debootstrap on an existing Ubuntu server
        2. Transfer the files to the swap partition of the RHEL 3.4 server
        3. Boot into the swap partition (the debootstrap system)
        4. Transfer the files to the original root partition
        5. Boot into the new Ubuntu system and finish up the installation with tasksel, apt-get, etc.
    I tested the method in a VM and then applied it to the server. I was lucky enough that everything went smoothly :)
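
    For concreteness, a sketch of what the debootstrap step might look like (mirror, architecture and target paths are assumptions, not taken from the question):

        apt-get install debootstrap
        debootstrap --arch=i386 lucid /srv/lucid-rootfs http://archive.ubuntu.com/ubuntu
        tar -C /srv/lucid-rootfs -czf lucid-rootfs.tar.gz .
        scp lucid-rootfs.tar.gz root@oldserver:/tmp/    # ready to unpack onto the swap partition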

    Read the article
