Search Results

Search found 13128 results on 526 pages for 'square root'.


  • nginx rewrite base url

    - by ptn777
    I would like the root url http://www.example.com to redirect to http://www.example.com/something/else. This is because some weird WP plugin always sets a cookie on the base url, which doesn't let me cache it. I tried this directive:

        location / { rewrite ^ /something/else break; }

    But 1) there is no redirect and 2) pages start firing more than 1,000 requests at my server. With this one:

        location / { rewrite ^ http://www.example.com/something/else break; }

    Chrome reports a redirect loop. What's the correct regexp to use?
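
    A common fix (not from the original post; the target path is taken from the question) is to match only the exact root with an exact-match location, so the redirect target no longer re-enters the rule:

        location = / {
            return 301 /something/else;   # exact-match: only the bare root redirects
        }

    location = / matches nothing but the root URL, and return issues a real HTTP redirect rather than an internal rewrite, which rules out both the missing redirect and the loop.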


  • I screwed up, exit in .bashrc

    - by camel_space
    I put "exit" in my .bashrc file. I don't have physical access to the machine so to connect to it I use ssh. I don't have root privileges. Every time I connect to the server, the connection automatically closes. So far, I've tried: Overwriting .bashrc with scp and sftp. The connection closes before I can do anything. Using a few different GUI programs to access ssh (connection closes) Overwriting the file with ftp. (can't use ftp) From my home computer $ ssh host "bash --noprofile --norc" (connection closes) $ ssh host "mv .bashrc bashrc_temp" (connection closes) $ ssh host "rm .bashrc" (same thing) $ ssh host -t (connection closes) Is there anything I can do to disable .bashrc or maybe overwrite the file before .bashrc is sourced?


  • Sharepoint 2007 reset permission inheritance

    - by e-mre
    I have a SharePoint 2007 document library with several levels of folders and files. Some folders in the middle of the hierarchy do not inherit permissions from their parents and have their own unique permissions defined. It is a huge library and there are many folders like this. I am currently changing the permission model of the library, and I want to reset all those unique permissions so that everything inherits from the library root (something like the "Replace child object permissions" checkbox in the Windows file system security window). If this is not possible, seeing a list of the folders that have unique permissions defined would also do.


  • Is there an RSM replacement in Windows 7? (to eject USB and Firewire devices from command line)

    - by Jay
    I know Windows XP and previous had an rsm.exe file in the <root>\windows\system32 folder that could be used to manage media. I used this command in batch files and from the command line to eject external disks (iPod, CF cardreader, etc.). This utility appears not to be included in Windows 7, and I wonder if there is some replacement utility that will allow the same thing. I've been unable to find any such thing. I'm aware of the 3rd party utilities; this question is only about what is included with Windows 7.
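
    PowerShell ships with Windows 7, so one in-box possibility (the drive letter is a placeholder; this invokes the same Eject verb as the Explorer context menu) is the Shell.Application COM object:

        $shell = New-Object -ComObject Shell.Application
        $shell.Namespace(17).ParseName("E:\").InvokeVerb("Eject")   # 17 = My Computer

    Saved as a .ps1 script, it can stand in for the old rsm.exe lines in existing batch workflows.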


  • How to extend a logical volume in VMware

    - by Mercer
    I have CentOS 6.3 in my virtual machine. I have 2 disks: disk #1 = 18G, disk #2 = 20G.

        [root@vm ~]# df -h
        Filesystem                         Size  Used Avail Use% Mounted on
        /dev/mapper/vg_system-lv_root     1008M  250M  708M  27% /
        tmpfs                              1.9G     0  1.9G   0% /dev/shm
        /dev/sda1                          194M   31M  154M  17% /boot
        /dev/mapper/vg_system-lv_home      504M   17M  462M   4% /home
        /dev/mapper/vg_system-lv_opt       2.0G   68M  1.9G   4% /opt
        /dev/mapper/vg_produits-lv_grid    6.9G  2.5G  4.1G  38% /opt/grid
        /dev/mapper/vg_produits-lv_oracle  6.9G  144M  6.4G   3% /opt/oracle
        /dev/mapper/vg_system-lv_tmp       2.8G   71M  2.6G   3% /tmp
        /dev/mapper/vg_system-lv_usr       2.5G  1.6G  799M  67% /usr
        /dev/mapper/vg_system-lv_var       2.0G  278M  1.6G  15% /var

    So I want to extend /tmp to 10 GB and /opt/oracle to 13 GB. Thx.
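
    A minimal sketch (the partition names on the 20G disk are assumptions; run vgdisplay first, since if the volume groups already have free extents the pvcreate/vgextend steps can be skipped). Note the two mount points live in different volume groups, so each VG needs its own space:

        pvcreate /dev/sdb1 /dev/sdb2                 # assumed new partitions on the 20G disk
        vgextend vg_system   /dev/sdb1
        vgextend vg_produits /dev/sdb2
        lvextend -L 10G /dev/vg_system/lv_tmp        # grow the logical volumes...
        lvextend -L 13G /dev/vg_produits/lv_oracle
        resize2fs /dev/vg_system/lv_tmp              # ...then grow the ext filesystems online
        resize2fs /dev/vg_produits/lv_oracle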


  • Wireless slow, but very odd

    - by Logman
    I have an HP Pavilion dv6 laptop, and internet over the wireless times out even though it connects fine to the AP/router. The computer had a virus (searchnu?), so I backed up the usual files and restored the laptop to the factory image. After the restore, the internet over the wireless is still the same, yet connected via wire everything works fine: I was able to update the whole system, and the internet worked perfectly at full speed. The wireless is still pitiful. Ping tests over wireless are interesting:

        Google    = 18ms
        Gmail     = 18ms
        Yahoo     = 1023ms
        8.8.8.8   = 30ms
        Microsoft = Request Timed Out
        Bing      = Request Timed Out
        MSN       = Request Timed Out

    Even though I get 18ms to Google, pages take a long time to load. Is this a rootkit? Is it the wireless card in the laptop?


  • Running php and java in parallel on the same server

    - by manni
    I have a Java server from Rackspace and am already running a Java application on it. Now I want to run a PHP application on the same server. What should I do? When I asked the Rackspace people, they said Apache is already installed on the server, so I can run PHP on it. I have tried installing PHP and then copied my PHP files into /var/www/xxx, but when I hit the URL it gives a "page not found" error. They have given me the ssh server root username and password. Thanks in advance.
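
    A minimal sketch, assuming a Debian/Ubuntu box (on RHEL/CentOS the package is php via yum and the service is httpd):

        sudo apt-get install php5 libapache2-mod-php5   # PHP plus the Apache module
        sudo a2enmod php5                               # usually enabled by the package
        sudo service apache2 restart

    After the restart, a file such as /var/www/xxx/info.php containing <?php phpinfo(); should render in the browser; if the URL still 404s, the active virtual host's DocumentRoot probably is not /var/www, which would also explain the original "page not found".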


  • How to configure a static wildcard subdomain with dnsmasq.

    - by Prody
    I have a network behind a NAT with a few machines. The machines are: router (NAT, dnsmasq, forwarding, directly connected to the inet), server (which runs ssh, www and some other stuff), and clients (which do stuff on server). I also have mydomain.com, and server.mydomain.com points to my connection's single IP, which is the router, which forwards ports to server. The server has an httpd running, which serves different sites based on vhosts, so I have site1.server.mydomain.com, site2... The problem is that all the traffic is going thru the router, and when I check logs I always see the router's IP for everything (so it's hard to see who is running the script with the while(1)). I would just ServerAlias site1.server.local, but most of the sites have a root URL saved somewhere on top of which other URLs are built, so I can't do that. The solution for me would be telling dnsmasq somehow to answer *.mydomain.com with the server's IP. Is this possible somehow?
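
    dnsmasq's address option does exactly this; it matches the domain itself and every name under it (the LAN IP below is a placeholder for the server's address):

        # /etc/dnsmasq.conf on the router
        address=/mydomain.com/192.168.0.2

    With that line, LAN clients resolving site1.server.mydomain.com, or any other *.mydomain.com name, get the server's LAN address directly, so traffic stops hairpinning through the NAT and the httpd logs finally show real client IPs.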


  • Redirect traffic to local address so iOS speedtest app measures LAN speed

    - by ivan_sig
    I have set up a Speedtest Mini server on a local LAMP box so I can test my LAN speeds effortlessly just by opening the URL in a Flash-enabled web browser. The thing is, I want my iOS and Android devices to test against the LAN server too, not the WAN, as I'm trying to measure LAN-only performance. Is there a way to redirect the traffic intended for a specific external IP (the one of the real server) to my local server? I know the server's IP, since a short Wireshark capture gave me the data, but I'm still searching for a way to make that redirect. I have jailbreak and root on my devices, so playing with system files is not a problem. I've tried setting up a proxy and making redirects via the hosts file and domain names, but it looks like Ookla's app connects by IP address only.
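
    Since the devices already use the router as their default gateway, one hedged option is a NAT rule on the router that rewrites packets addressed to the real Speedtest server (both IPs below are placeholders for the real external IP and the LAMP box):

        # on the router: divert web traffic bound for the external test server
        iptables -t nat -A PREROUTING  -d 203.0.113.50 -p tcp --dport 80 \
                 -j DNAT --to-destination 192.168.1.10:80
        # masquerade so replies flow back through the router instead of directly
        iptables -t nat -A POSTROUTING -d 192.168.1.10 -p tcp --dport 80 -j MASQUERADE

    This catches apps that ignore the hosts file because they connect by raw IP; the trade-off is that the LAMP box sees the router as the client address, which hardly matters for a throughput test.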


  • reverse_proxy (mod_rewrite) and rails

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is, the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. I.e.: http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. Is there a way to fix this server-side, so that the links will be routed correctly?
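
    One server-side avenue (a sketch, not the original prg:/ map setup; it assumes mod_proxy_html is installed and the responses are HTML) is to have Apache rewrite links inside the proxied bodies, since ProxyPassReverse only touches headers:

        ProxyPass        /app_one/ http://www.remote_host_running_app_one.com/
        ProxyPassReverse /app_one/ http://www.remote_host_running_app_one.com/
        ProxyHTMLEnable  On
        ProxyHTMLURLMap  / /app_one/     # prefix root-relative links in the HTML

    The alternative that avoids body rewriting altogether is to tell each backend its mount point (Rails' relative URL root, e.g. RAILS_RELATIVE_URL_ROOT=/app_one), so the helpers generate /app_one/controller/action in the first place.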


  • Duplication of Windows 7 Backup

    - by Steven Pickles
    I use the built-in backup utility for Windows 7 because it's automated and flexible enough to let me schedule a daily shadow-copy backup of particular files and folders directly to a separate internal RAID 0 array (2 x 1TB). It's also lightweight and stays out of the way. For off-site backup purposes, each week I copy the contents of the internal backup from the RAID 0 array to an external 1 TB drive, and I then move this drive to a different building. The copy from the internal backup to the external backup typically works like this:

        1. mount and erase the contents of the external drive
        2. highlight the backup "file" on the internal drive, hit CTRL+C
        3. CTRL+V on the root directory of the external drive

    Is there a better way to synchronize? Microsoft's SyncToy application does a pitiful job and often leaves the folders not truly synchronized... which completely defeats the ability to use the backup's restore feature.
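
    Robocopy ships with Windows 7 and mirrors a whole tree in one pass (drive letters and folder names are placeholders):

        robocopy D:\Backup E:\Backup /MIR /R:1 /W:1 /LOG:C:\sync.log

    /MIR copies only what changed and deletes anything no longer present in the source, so the external drive stays an exact replica without the manual erase step, and the weekly run can live in Task Scheduler.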


  • Safety concerns on allowing connections to MySQL with no password on localhost?

    - by ÉricO
    In the case of a Linux system, is there any security concern to let MySQL users with standard privileges (that is, not the root users) connect to the database with no password from localhost? I think that enforcing a password even for localhost can add a layer of protection, since, with no password the database access would be compromised if the SSH access is itself compromised. Considering that, would it be less safe to allow no password connection to MySQL than having the same password for SSH and for MySQL? I don't know if that is to be taken into account, but we also use phpMyAdmin to let users administrate their own database. I am asking because I kinda dislike having to put our database passwords unencrypted in the source or configuration files of our applications, where they can easily be leaked unintentionally. Since our servers are configured to run our applications as the Linux user the application belongs to, I was considering allowing no password from localhost as a simple solution. So, would that be a very bad idea or not?
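
    One related option (assuming the installed server ships the plugin, as recent MySQL builds do; MariaDB calls its equivalent unix_socket) is socket authentication: passwordless, but still tied to the Linux account, so a leaked config file discloses nothing:

        mysql> INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
        mysql> CREATE USER 'appuser'@'localhost' IDENTIFIED WITH auth_socket;

    The server then accepts the connection only when the Unix user on the other end of the socket is literally appuser, which lines up with the setup described above where each application runs as its own Linux user.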


  • How to tell start-stop-daemon to update $HOME and $USER accordingly to --chuid parameter

    - by iElectric
    I'm trying to run a service that uses the $HOME and $USER environment variables. I could set them in the service itself, but that would only be a temporary solution. Let's say I have a script test.sh with the following content:

        echo $USER

    And I run it with start-stop-daemon to see my results:

        $ start-stop-daemon --start --exec `pwd`/test.sh --user guest --group guest --chuid -guest
        root

    It seems like it does not update the environment; maybe that should be reported as a bug? I have found a nasty hacky solution, which (for unknown reasons) works in this simple use case:

        $ start-stop-daemon --exec /usr/bin/sudo --start -- -u guest -i 'echo $USER'
        guest

    I'm sure someone else has stumbled upon this; I'm interested in a clean solution.

        $ start-stop-daemon --version
        start-stop-daemon 1.13.11+gentoo
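
    One clean-ish workaround (a sketch; the paths and the guest home directory are assumptions) is to launch through /usr/bin/env, so the variables are set after the uid switch and without sudo:

        start-stop-daemon --start --chuid guest:guest \
            --exec /usr/bin/env -- HOME=/home/guest USER=guest /path/to/test.sh

    start-stop-daemon only switches credentials and deliberately leaves the environment alone, so wrapping the real daemon in env (or su -l / runuser) is the usual way to get login-like variables.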


  • Linux File Permissions & Access Control Query

    - by Jason
    Hi, let's say I am user bob in group users. There is this file:

        -rw----r-- 1 root users 4 May  8 22:34 testfile

    First question: why can't bob read the file, given that it's readable by others? Is it simply that if you are denied by the group bits, you are automatically blacklisted for the others bits? I always assumed that the final 3 bits took precedence over the user/group permission bits; guess I was wrong... Second question: how is this implemented? I suppose it's linked to the first query, but how does this work in relation to access control? Is it related to how ACLs work / are queried? I'm just trying to understand how these 9 permission bits are actually implemented/used in Linux. Thanks a lot.
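
    The kernel picks exactly one class (owner if the uid matches, otherwise group if any of the process's groups match, otherwise other) and applies only that class's bits; it never falls through to a more permissive class. A quick demonstration (user and group ids assumed):

        $ ls -l testfile
        -rw----r-- 1 root users 4 May  8 22:34 testfile
        $ id bob
        uid=1001(bob) gid=100(users) groups=100(users)
        $ sudo -u bob cat testfile
        cat: testfile: Permission denied

    POSIX ACLs (getfacl/setfacl) generalize the same short-circuit rule: the most specific matching entry decides, even when a broader entry would have granted more.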


  • How do I get these permissions working right so Apache can work with the files?

    - by cosmicbdog
    I am having a go at setting up my own Apache and can't seem to get my head around the permissions. Let's say I grab a file from somewhere off the web and it has permissions of 600. I then upload this file via ftp to a user directory, which is also an Apache virtual site, and so the file retains its permissions of 600. This means that the user can read the file, but Apache can't: it will be forbidden. What is the simplest solution so that Apache can read and write whatever files end up in the user's directory? Can Apache be granted some sort of root power over files in a directory?
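
    Rather than granting Apache root-like power (which it deliberately never gets), the usual pattern is to share the directory with the web server's group (www-data is a Debian-style assumption; Red Hat systems use apache):

        chown -R ftpuser:www-data /var/www/site   # user keeps ownership, Apache gets the group
        chmod -R u=rwX,g=rwX,o= /var/www/site     # rw for both; X sets execute on dirs only
        chmod g+s /var/www/site                   # new uploads inherit the www-data group

    A freshly uploaded 600 file then only needs its group bits opened (chmod g+rw, or an ftp umask of 007) to become readable by Apache without being world-readable.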


  • How do you interpret `strace` on an apache process returning `restart_syscall`?

    - by indiehacker
    We restart an apache server every day because RAM usage reaches its limit. Lowering MaxClients in the Apache configuration has value (see this serverfault answer), but I don't think it is a solution to the unknown root problem. Can you make sense of the data below? Here is an extract of what top, sorted by memory usage (M), returns:

        20839 www-data  20   0 1008m 359m  22m S    4  4.8   1:52.61 apache2
        20844 www-data  20   0 1008m 358m  22m S    1  4.8   1:51.85 apache2
        20842 www-data  20   0 1008m 356m  22m S    1  4.8   1:54.60 apache2
        20845 www-data  20   0  944m 353m  22m S    0  4.7   1:51.80 apache2

    Investigating a single process with "sudo strace -p 20839" returns only this one line, which is cryptic to me:

        restart_syscall(<... resuming interrupted call ...> <unfinished ...>

    Any insights? Thanks.
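
    restart_syscall by itself is not alarming: it only means the thread was parked in an interrupted, auto-restarting wait (typically a timed sleep) when strace attached. Apache is multi-process, so tracing one quiet pid shows little; following children gives a fuller picture (a diagnostic sketch using the pid from above):

        sudo strace -f -tt -p 20839    # -f follows forks/threads, -tt adds timestamps

    For steadily growing RES like this, the usual suspects are application modules (mod_php and friends) rather than syscalls, so mod_status and diffing /proc/PID/smaps between requests tend to reveal more than strace will.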


  • What are 'damaged files' on external hard drive (HFS format for OS X)?

    - by dtlussier
    I have an external HD formatted as default HFS (Mac OS Extended, Journaled), and every once in a while I get a folder called DamagedFiles in the root of the volume. The folder contains a collection of links to files on the drive. In general the files seem fine; I am, for example, able to open the images or text files without a problem. Is this serious? What can I do to fix this problem? Any advice would be great, as I couldn't find anything on here or via Google that addresses this problem in particular. Many thanks.
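
    The DamagedFiles folder is created when a filesystem repair (the automatic check at mount time, Disk Utility, or fsck_hfs) finds files whose extents overlap or reference questionable allocation blocks; it holds aliases to the affected files so they can be inspected. A sensible first step (the volume name below is an assumption; check with "diskutil list"):

        $ diskutil list                              # find the external volume
        $ diskutil verifyVolume /Volumes/External
        $ diskutil repairVolume /Volumes/External    # unmounts, repairs, remounts

    If the folder keeps reappearing after repairs, treat it as an early warning of a failing drive and refresh your backups before anything else.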


  • Can't boot Windows after installing Linux

    - by user4035
    I have a partition /dev/sdb1 where my old Windows XP resides. All the files are there intact, and I can see them by mounting the disk from Linux. Linux is on /dev/sdb2, but when I choose Windows at the LILO prompt, it doesn't load. I have the following lilo.conf:

        boot = /dev/sdb
        # Linux bootable partition config begins
        image = /boot/vmlinuz
          root = /dev/sdb2
          label = Linux
          read-only    # partitions should be mounted read-only for checking
        # Linux bootable partition config ends
        # Windows bootable partition config begins
        other = /dev/sdb1
          label = Windows
          table = /dev/sdb
        # Windows bootable partition config ends

    What can be wrong?
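
    Windows XP generally refuses to boot from what the BIOS presents as the second disk. LILO can swap the BIOS drive numbers for that one entry (a sketch assuming the first BIOS disk is 0x80 and this disk is 0x81):

        other = /dev/sdb1
          label = Windows
          table = /dev/sdb
          map-drive = 0x80
             to = 0x81
          map-drive = 0x81
             to = 0x80

    Run lilo afterwards to rewrite the boot map; if XP was installed while its disk sat elsewhere, its boot.ini rdisk() numbers are worth checking too.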


  • Running 'dd' command at startup?

    - by Usman Ajmal
    Hi, I have set a script to run at Linux startup. The script contains the following line:

        dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    When my Linux desktop appears, result.txt contains:

        dd: opening '/dev/sda2': Permission denied

    If I prefix the dd command with sudo, as in:

        sudo dd if=/dev/sda2 of=/dev/sda5 2> result.txt

    result.txt contains:

        sudo: no tty present and no askpass program specified

    Is there a way I can get around this problem? What I want is to copy the 2nd partition to the 5th when a user logs in, no matter whether they are root, admin, or an unprivileged desktop user. Thanks a lot, as always.
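
    Two hedged options, depending on when the copy really has to run. If boot time is enough, call the script from a root-owned hook such as /etc/rc.local and drop sudo entirely. If it must fire at login for any user, grant that single command passwordless root rights via sudoers (edit with visudo; the file name is an assumption):

        # /etc/sudoers.d/clone-partition
        ALL ALL=(root) NOPASSWD: /bin/dd if=/dev/sda2 of=/dev/sda5

        # and in the login script:
        sudo dd if=/dev/sda2 of=/dev/sda5 2> "$HOME/result.txt"

    The "no tty present" error goes away because with NOPASSWD sudo never needs to prompt; restricting the rule to this exact dd invocation keeps the privilege narrow.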


  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using 4 bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number for the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use; the scsi- one or the ata- one?
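
    Both names are symlinks to the same block device: the ata- entry is generated from the drive's ATA identify data, the scsi- one from the SCSI layer that Linux routes SATA disks through, and ls -l shows them resolving to the same sdX node. Either works; the ata- names carry the full model string and serial, so they are the common choice (raidz below is an illustrative assumption, not from the original post):

        zpool create tank raidz \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
          /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC

    Because the pool records by-id paths, it survives the drives being shuffled between backplane slots.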


  • Xorg configuration file on Debian Testing

    - by nubicurio
    I cannot find the Xorg configuration file on my newly installed Debian on my tablet PC, so I followed the tutorial at http://wiki.debian.org/Xorg and ran the command "Xorg -configure", which gave me the following error messages:

        (EE) Failed to load module "vmwgfx" (module does not exist, 0)
        (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        FATAL: Module fbcon not found.
        Number of created screens does not match number of detected devices.
        Configuration failed.

    Does anyone know what this means and how I should proceed? Why is there a warning about vmware, and what is this fbcon module?
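
    For context (a hedged note, not from the wiki page): on Debian of this era there is no xorg.conf by default, the server autodetects everything at startup, and per-device tweaks belong in a snippet under /etc/X11/xorg.conf.d/ rather than a monolithic file. If a generated skeleton is still wanted, the display manager has to release the GPU first:

        # stop whichever display manager is running (gdm3 is an assumption)
        service gdm3 stop
        Xorg -configure
        cp /root/xorg.conf.new /etc/X11/xorg.conf

    The vmware/vmwgfx lines are just one driver probing and giving up, harmless on real hardware, and fbcon is built into stock Debian kernels, so that FATAL line is usually noise; the "Number of created screens" failure most often means another X server was still holding the devices.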


  • Nginx - Address already in use

    - by user2426362
    If I run "service nginx restart" I get this error:

        root@user /etc/nginx/sites-enabled # service nginx restart
        Restarting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
        nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
        nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
        nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
        nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
        nginx: [emerg] still could not bind()

    How do I fix it? I also have an Apache configuration running on port 80.
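
    Two daemons cannot both bind 0.0.0.0:80, so either Apache goes away or one of them moves (a sketch; service names assume a Debian-style layout):

        netstat -tlnp | grep :80      # confirm it is Apache holding port 80
        service apache2 stop          # option 1: free the port for nginx
        # option 2: keep both, and in the nginx server block use another port:
        #     listen 8080;

    A common third pattern is nginx on 80 proxying to Apache on 8080 (proxy_pass http://127.0.0.1:8080), but whichever way, exactly one process can own port 80.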


  • join ZFS/Solaris to Windows AD 2003/2008 domain

    - by user95587
    I have a client trying to join his newly updated ZFS/Solaris box to my Windows AD 2003/2008 domain. Here is the command he is using and the error he is getting. Console:

        root@xxx:/etc/inet# smbadm join -u USER DOMAIN
        After joining DOMAIN the smb service will be restarted automatically.
        Would you like to continue? [no]: yes
        Enter domain password:
        Joining DOMAIN ... this may take a minute ...
        failed to join DOMAIN: UNSUCCESSFUL
        Please refer to the system log for more information.

    From /var/adm/messages:

        Sep 22 10:12:00 xxx smbd[593]: [ID 702911 daemon.error] smbrdr_exchange[116]: failed (-3)
        Sep 22 10:12:01 xxx smbd[593]: [ID 232655 daemon.notice] ldap_modify: Insufficient access
        Sep 22 10:12:01 xxx smbd[593]: [ID 898201 daemon.notice] Unable to set the TRUSTED_FOR_DELEGATION userAccountControl flag on the machine account in Active Directory. Please refer to the Troubleshooting guide for more information.
        Sep 22 10:12:01 xxx smbd[593]: [ID 526780 daemon.notice] Failed to establish NETLOGON credential chain
        Sep 22 10:12:01 xxx smbd[593]: [ID 871254 daemon.error] smbd: failed joining DOMAIN (UNSUCCESSFUL)


  • Safari 7 SSL error if using IP address

    - by K. Biermann
    I have created my own CA for internal usage and set the root certificate as trusted on my machines. With this CA I signed the SSL certificates for my internal servers. I only address them by IP, so I used each server's IP as the certificate name. If I connect to the servers with Chrome or mobile Safari it works without problems, but if I use Safari 7 under Mavericks (on the same machine, with the same keychain) I get the following error: "The certificate is not valid (host name mismatch)". I double-checked that I entered the correct IP ("https://192.168.2.130"), but I always get the same error. Do I need to enter a different name for the certificate, or is it just that Safari doesn't support SSL certificates for IPs? Thanks in advance, and please excuse my bad English :D
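
    Safari 7 is the strict client here: for an https://IP URL it expects the IP to appear as an IP-type subjectAltName in the certificate, and a bare Common Name no longer satisfies it, while Chrome and older clients still fall back to the CN. A sketch of how the SAN can be added when signing with the private CA (file names are assumptions):

        # san.cnf: extension fragment used at signing time
        [ v3_req ]
        subjectAltName = IP:192.168.2.130

        # sign the server CSR with the CA, attaching the extension:
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -out server.crt -days 825 -extfile san.cnf -extensions v3_req

    After reissuing, the same keychain trust should make Safari accept the address it previously rejected.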


  • How to downgrade a kernel?

    - by JATMON
    I need to downgrade the kernel from 2.6.32-358.6.2.el6.centos.plus.x86_64 to 2.6.32-220.el6.x86_64. I am unable to install the older version using yum/rpm, as it gives the following error:

        [root@localhost kernels]# rpm -i --ignoreos kernel-2.6.32-220.el6.x86_64.rpm
        warning: kernel-2.6.32-220.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 192a7d7d: NOKEY
        package kernel-2.6.32-279.el6.x86_64 (which is newer than kernel-2.6.32-220.el6.x86_64) is already installed
        package kernel-2.6.32-358.6.1.el6.centos.plus.x86_64 (which is newer than kernel-2.6.32-220.el6.x86_64) is already installed
        package kernel-2.6.32-358.6.2.el6.centos.plus.x86_64 (which is newer than kernel-2.6.32-220.el6.x86_64) is already installed

    I can't remove the currently running kernel, so what's the way out? yum search doesn't even find this old version, so I had to get the rpm from the web. Any help is much appreciated.
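
    Kernel packages are multi-install, so the older one can be added alongside the running kernels; rpm just needs to be told that installing an older version is intentional (a sketch; the GRUB index depends on the local menu order):

        rpm -ivh --oldpackage kernel-2.6.32-220.el6.x86_64.rpm
        # then pick it as the default in /boot/grub/grub.conf:
        #     default=3     <- index of the 2.6.32-220 entry, counting from 0
        reboot

    Nothing has to be removed: the machine simply boots 2.6.32-220 next time, and the newer kernels stay in the GRUB menu as a fallback. (The old rpm itself can also come from the CentOS vault repositories rather than a random website.)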

