Search Results

Search found 13995 results on 560 pages for 'home'.


  • How to selectively route network traffic through VPN on Mac OSX Leopard?

    - by newtonapple
    I don't want to send all my network traffic over the VPN when I'm connected to my company's network (via VPN) from home. For example, when I'm working from home I would like to be able to back up all my files to the Time Capsule at home and still reach the company's internal network. I'm using Leopard's built-in VPN client. I've tried unchecking "Send all traffic over VPN connection," but if I do that I lose access to my company's internal websites, whether via curl or the web browser (internal IPs are still reachable). Ideally I could selectively choose a set of IPs or domains to be routed through the VPN and keep the rest on my own network. Is this achievable with Leopard's built-in VPN client? If you have any software recommendations, I'd like to hear them as well.
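
    One common pattern (a sketch only; the interface name ppp0, the 10.0.0.0/8 subnet, the corp.example.com domain and the 10.0.0.53 DNS address are placeholders, not values from the question) is to leave "Send all traffic over VPN connection" unchecked, route just the corporate subnets over the VPN interface, and give the internal domain a scoped resolver so internal hostnames keep resolving:

        # Route only the corporate subnet over the VPN link (repeat per subnet):
        sudo route -n add -net 10.0.0.0 -netmask 255.0.0.0 -interface ppp0

        # Let hostnames under the internal domain resolve via the corporate DNS server:
        sudo mkdir -p /etc/resolver
        printf 'nameserver 10.0.0.53\n' | sudo tee /etc/resolver/corp.example.com

    If the VPN is PPTP/L2TP (pppd-based), the route command can be made persistent by placing it in an /etc/ppp/ip-up script.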

    Read the article

  • Allow certain users to access a specific directory?

    - by animuson
    I'm trying to figure out how to allow certain users (accounts that are all mine) to access a directory of files that I want to share across my own accounts. I'm using cPanel and I used WHM to create three separate accounts. The files I want to use are on account1 in the directory /home/account1/public_html/source/engines, and I want the directory /home/account2/public_html/source/engines to use the exact same files without having to upload them to both places every time I change them, so I created a simple symbolic link and added account2 to the group account1 (while keeping its own group as the primary). It still gives me a Permission Denied error, though. Is there any way I can grant account2, and other accounts that I create for myself, access to those files? I don't want them to be readable by all users, because I don't want my hosted users to be able to access them -- only my own accounts.

        groups account1 returns: account1 : account1
        groups account2 returns: account2 : account2 account1
        /home/account1/public_html/source/engines and all its files belong to account1:account1

    Any other information you might need, just ask.
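
    Group membership alone isn't enough: account2 also needs read permission on the files and execute (search) permission on every directory leading to them. A sketch, run as root, assuming account2 is already in group account1 as described (note that new group membership only takes effect for new logins/processes):

        chgrp -R account1 /home/account1/public_html/source/engines
        chmod -R g+rX /home/account1/public_html/source/engines
        # the parent directories must be searchable by the group as well:
        chmod g+x /home/account1 /home/account1/public_html /home/account1/public_html/source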

    Read the article

  • How to change the document root to public_html from the root directory

    - by manish
    For testing I hosted my website on a free server from 000webhost.com. Their directory structure separates a root folder from a public folder (public_html): library files stay in the root folder and all public data goes in public_html. I developed my website accordingly, and my final structure looked like this:

        /
        /include (library files)
        /logs (log files)
        /public_html
        /public_html/index.php
        /public_html/home.php
        /public_html/ (other public files)

    On 000webhost only public_html is reachable via URL, so my URLs looked neat and clean, like www.example.com/index.php or www.example.com/home.php. After finishing development I moved the website to a shared host purchased from go-daddy.com. They have no such separation: all the files sit in the root folder and are accessible via URL, so my URLs have become www.example.com/public_html/home.php or www.example.com/public_html/index.php. How can I redirect URL requests to the public_html folder again, so that the library files are not publicly accessible and the URLs stay neat and clean?
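
    A minimal .htaccess sketch for the account root on the new host, assuming mod_rewrite is enabled there and the layout matches the structure above:

        RewriteEngine On

        # refuse direct requests for the library and log folders
        RewriteRule ^(include|logs)(/|$) - [F,L]

        # serve everything else from /public_html without showing it in the URL
        RewriteCond %{REQUEST_URI} !^/public_html/
        RewriteRule ^(.*)$ /public_html/$1 [L]

    If the host's control panel lets you change the domain's document root to the public_html folder, that is the cleaner fix and needs no rewriting at all.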

    Read the article

  • Changing linksys router configuration from command-line

    - by Dan
    I am constantly logged into (ssh'd) my home machine (ubuntu) from various remote locations. Sometimes I would like to change my home linksys router settings (change the port forwarding settings or disable/enable wireless, things like that). When I try and use the links2 text browser, there isn't much I can do because the tab titles don't show up (presumably because they are pictures?). Is there another way of configuring a linksys router from a command line? I guess my only other option is to set up a proxy on my home machine and use a browser connected to that proxy to configure it, but I would think there might be a non-browser way of doing it. Thanks
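
    Two sketches, neither specific to a particular Linksys model. The web interface normally accepts HTTP basic auth, so curl can reach it directly (the credentials and any form fields are firmware-specific placeholders); alternatively, ssh itself can provide the proxy, so no extra proxy software is needed:

        # 1) Talk to the router's web interface from the command line:
        curl -u admin:password http://192.168.1.1/

        # 2) Or open a SOCKS proxy through the home machine and point a local
        #    graphical browser at localhost:1080 to browse http://192.168.1.1:
        ssh -D 1080 user@home-machine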

    Read the article

  • I just moved, have no internet on my laptop or ipod, but everyone else who lives here does

    - by Kay
    I just moved to Thunder Bay, and neither my laptop nor my iPod can connect to the internet. My laptop lets me enter the password for the wifi, but I still have no internet connection. When I try to use the cable, the computer tells me I have a perfect connection, and even the icon shows that it's working, but I can't open any web pages or use any internet functions. When I try to use MSN it sends me to a troubleshooting option and informs me that there's some kind of problem with the "gateway". I have unplugged the modem and the router and plugged them back in; this did not help. I am living in a home where everyone else uses wifi on the same system as me, and no one has ever had any problems. At my previous home, both my laptop and my iPod worked without a problem, both in the house and on campus. Since this problem seems to be limited to me, that would indicate a problem on my end, with my laptop. However, in that case my iPod would be working. It has never failed to connect before. Any suggestions would be much appreciated.
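
    A few quick checks that narrow down whether the problem is the address/gateway the laptop is getting or its own network stack (a sketch, assuming the laptop runs Windows, which the question doesn't state):

        ipconfig /all            :: note the IPv4 address, default gateway and DNS servers
        ping 192.168.1.1         :: replace with the gateway address shown above
        ipconfig /flushdns       :: clear DNS entries carried over from the old network
        netsh winsock reset      :: reset the TCP/IP stack, then reboot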

    Read the article

  • Serving static files fails - nginx

    - by Sergei
    Hi, I've been looking and trying around all night, but without success. I configured nginx to serve my static files and proxy all the other traffic:

        server {
            listen 80;
            server_name mydomain.com;

            access_log /home/boudewijn/www/bbt/brouwers/logs/access.log;
            error_log /home/boudewijn/www/bbt/brouwers/logs/error.log;

            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }

            location /media/ {
                root /home/boudewijn/www/bbt/brouwers/;
            }
        }

    The proxy passing is no problem, but when I go to mydomain.com/media/ or try to access any test file there, I have no success. I paid attention to the difference between root and alias, my media folder exists, and I paid attention to the trailing slashes, but I still get a 404 when trying to access my static media files. Any help?
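
    With root inside location /media/, nginx maps a request for /media/foo.png to /home/boudewijn/www/bbt/brouwers/media/foo.png, and the 404 is logged with the exact filename it tried; a home directory that isn't world-searchable is another frequent cause. A diagnostic sketch (the worker user is often www-data or nginx; adjust to your setup):

        # See exactly which path nginx tried for the failing request:
        tail -f /home/boudewijn/www/bbt/brouwers/logs/error.log

        # Check that every directory on the path is searchable by the worker user:
        namei -l /home/boudewijn/www/bbt/brouwers/media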

    Read the article

  • How to disable "N" Wireless Mode RTL8192 (Thinkpad Edge 15 Core i5) in natty

    - by Gustavo Rubio
    I've seen many owners of ThinkPad Edges, which are supposed to be Linux-friendly, having problems with the wireless adapter. I've found several links on Ask Ubuntu and in the Ubuntu forums with a lot of workarounds for those problems; mine seems to be weird, though. I use my laptop at both my office and at home. At home I have an A/B/G router and the wireless connection works just fine, using a WEP key. But at work I have a B/G/N wireless router and it doesn't work; my guess is that this adapter works with N modes but somehow this is buggy in the driver bundled with Natty. I've tried to disable the "N" mode on the router, but that didn't work. Later I went to the Realtek website, downloaded their driver and compiled it myself. That kinda seems to work most of the time, but sometimes websites keep trying to load, or load only partially, and images look like broken links, much like what you get when a page is loading and the connection is suddenly lost. This problem, as I said, happens only with the Realtek driver from their website. dmesg gives me a lot of these:

        [ 5869.049454] rtl8192se_update_ratr_table: ratr_index=0 ratr_table=0x00000ff5
        [ 5879.240563] DHCP pkt src port:68, dest port:67!!

    So I thought I might as well switch back to the original driver, which seems to work just fine on A/B/G wireless networks but not on N networks, so if anybody knows how to disable that mode from within the driver, please let us know :) Thanks in advance! PS: I did find a link to a similar question and it was answered, but let me remind you I'm NOT using the Intel wireless in my ThinkPad but the Realtek (RTL8192SvB).
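
    The in-kernel rtl8192se driver doesn't advertise an obvious "disable 802.11n" switch, so the sketch below only shows how to look for one and, if it exists, how to pin it; the parameter name ht_enable is purely hypothetical:

        # List the module parameters this driver actually exposes:
        modinfo rtl8192se | grep -i parm

        # Only if an HT/N-related parameter exists (hypothetical name shown), pin it
        # in a modprobe config and reload the module:
        echo "options rtl8192se ht_enable=0" | sudo tee /etc/modprobe.d/rtl8192se-disable-n.conf
        sudo modprobe -r rtl8192se && sudo modprobe rtl8192se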

    Read the article

  • Session management error: None of the authentication protocols specified are supported

    - by JBWhitmore
    The title is the first error that sent me on a mission to fix things. Motivation: I was trying to install the new Enthought Python Distribution when the error above first showed up. The install finished, but it looked like it flagged dcopserver problems a few more times:

        Please check that "dcopserver" program is running!
        Could not read network connection list: ~/home/user/.DCOPserver_host__0

    When running ipython from the distribution, it claims that readline (the ability to arrow up through history or tab-complete) is not available on my system. It is, though: if I run the ipython that's sitting in /usr/bin/ipython, all readline features work perfectly. So I tried to fix the install by fixing what I thought could be causing the problems. Bad things that are happening that I want fixed:

        When restarting I get the error: Could not update ICEauthority file /home/username/.ICEauthority.
        ipython readline doesn't work with Enthought's ipython

    Things I have tried:

        changed the owner of my ~/.ICEauthority to be me
        changed the owner of my home directory (and all nested files and folders) to be me
        double-checked that /var/lib/gdm was owned by Gnome (yep)
        attempted to reinstall DCOP, kbuildsycoca stuff (fail)
        removed nautilus; rebooted; reinstalled; rebooted; removed ubuntu-desktop; rebooted; reinstalled; rebooted

    Any suggestions on how to fix the Bad Things that are happening would be greatly appreciated! Computer: Ubuntu 10.04 x86
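
    For the ICEauthority error specifically, the usual culprits are ownership or permissions on the file or on the home directory itself; a conservative sketch (username is a placeholder):

        # Who owns the home directory and the file, and with what modes?
        ls -ld /home/username /home/username/.ICEauthority

        # Make sure both belong to you and the file is private; if the file is
        # hopelessly wrong, deleting it lets the session recreate it at next login.
        sudo chown username:username /home/username /home/username/.ICEauthority
        chmod 600 /home/username/.ICEauthority

    The readline issue is separate: Enthought's ipython runs against Enthought's own Python, so a readline module has to be installed into that environment rather than the system one.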

    Read the article

  • Virtualmin deactivating PHP on new virtual servers

    - by Josh
    This is related to my other question... but the situation is much worse now. After updating to the most recent version of Virtualmin, when I create new accounts, Virtualmin sets up their VirtualHost entries as follows:

        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            FCGIWrapper /home/username/fcgi-bin/php.fcgi .php
        </Directory>
        <Directory /home/username/cgi-bin>
            allow from all
        </Directory>
        [...]
        RemoveHandler .php

    Now, not only is it specifically inserting AddHandler fcgid-script and FCGIWrapper... which I do not want because I am using mod_fastcgi, but it's also setting up PHP in such a way that it will never work! It's adding a RemoveHandler .php after setting up the handler for PHP! Where is this behavior configured and how can I stop it? Better yet, how can I make Virtualmin not include any PHP commands at all in the VirtualHost section?
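
    Virtualmin builds those directives from the domain's PHP execution mode and from the Apache directives in its Server Template; in the versions I've used, changing the mode under Server Configuration -> Website Options (or editing System Settings -> Server Templates -> Apache website) stops the fcgid lines from being emitted, though menu names may differ by version. As a sketch, for a mod_fastcgi setup the generated block would simply omit the handler lines and leave PHP to the global configuration:

        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            # no AddHandler/FCGIWrapper here, and no trailing RemoveHandler .php
        </Directory>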

    Read the article

  • /tmp shows 690 MB used, actual size 72 KB. Why?

    - by Ankit
    Why is the /tmp directory on my system showing 690 MB used, whereas du -sh /tmp shows only 72K? The contents of /tmp, followed by the df and du output:

        drwxrwxrwt 2 lightdm lightdm 4096 Aug 29 21:49 at-spi2
        drwx------ 2 ankit ankit 4096 Aug 29 21:50 keyring-0JTfoY
        drwx------ 2 ankit ankit 4096 Aug 29 21:44 keyring-rChLLL
        drwx------ 2 root root 16384 Jul 22 02:10 lost+found
        drwx------ 2 ankit ankit 4096 Jan 1 1970 orbit-ankit
        drwx------ 2 lightdm lightdm 4096 Aug 29 21:50 pulse-2L9K88eMlGn7
        drwx------ 2 root root 4096 Aug 29 21:44 pulse-PKdhtXMmr18n
        drwx------ 2 ankit ankit 4096 Aug 29 21:50 pulse-zR1TZUAZfmQW
        drwx------ 2 ankit ankit 4096 Aug 29 21:44 ssh-dlslOXOq2203
        drwx------ 2 ankit ankit 4096 Aug 29 21:50 ssh-MrQQVRyy3316
        -rw------- 1 ankit ankit 0 Aug 29 21:45 tmp0qnNG4
        -rw------- 1 ankit ankit 0 Aug 29 21:50 tmpVvSMt6
        -rw------- 1 ankit ankit 0 Aug 29 21:49 tmpy9Gadz
        -rw-rw-r-- 1 lightdm lightdm 0 Aug 29 21:44 unity_support_test.0

        ankit@duster:/tmp$ df -h
        df: `/home/ankit/.gvfs': Transport endpoint is not connected
        Filesystem Size Used Avail Use% Mounted on
        /dev/sda1 79G 11G 65G 14% /
        udev 2.9G 4.0K 2.9G 1% /dev
        tmpfs 1.2G 868K 1.2G 1% /run
        none 5.0M 0 5.0M 0% /run/lock
        none 2.9G 220K 2.9G 1% /run/shm
        /dev/sda7 38G 690M 35G 2% /tmp
        /dev/sda5 93G 26G 63G 30% /home
        /dev/sda6 93G 1.6G 87G 2% /boot
        /dev/sda3 154G 69G 78G 48% /home/mount_150

        ankit@duster:/tmp$ sudo du -sh /tmp/
        72K
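
    A freshly created ext4 filesystem already "uses" space for its journal and metadata, which du never sees, and space can also be held by files that were deleted while still open; two quick checks (a sketch, with the device name taken from the df output above):

        # Any deleted-but-still-open files holding space on /tmp?
        sudo lsof +L1 /tmp

        # Journal details for the filesystem backing /tmp:
        sudo dumpe2fs -h /dev/sda7 2>/dev/null | grep -i journal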

    Read the article

  • Renting a Linux server just to make backups of my personal data?

    - by Matthieu
    Hi all, I would like to be able to back up ALL my computers' data on a Linux server. For now I have a home server, but soon I will be travelling, without a home (so no home server). I was thinking of renting a dedicated Linux web server, but that is expensive, and I don't need a fast, web-oriented machine with a MySQL server and all; I just need full SSH access (full control, so I can install my own programs). Do "backup servers" exist? Or am I going about this the wrong way (maybe this is not a good solution)? Note: I run Mac OS, Windows and Linux; I back up with rsync; I want full control over my backups, not an automated "magic" backup like MobileMe or anything like that. Edit: I need around 500 GB of storage.
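
    Whatever gets rented (a small VPS or a storage-oriented box), rsync over SSH covers the workflow described; a minimal sketch with placeholder host and paths:

        # Incremental backup of a home directory to the rented machine:
        rsync -az --delete -e ssh /Users/me/ backupuser@backuphost:/backups/macbook/

        # Dry run afterwards to see what would still differ:
        rsync -azn --delete -e ssh /Users/me/ backupuser@backuphost:/backups/macbook/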

    Read the article

  • Best way to enlarge the system partition

    - by yuvi
    I have a problem: I need to enlarge my system partition. When I initially installed Ubuntu, I split the disk so I have 15GB for the system and the rest (around 400GB) pointed at /home/. This is very useful if anything goes wrong someday and I want to format and completely re-install Ubuntu without losing any of my actual data. The problem is, 15GB isn't enough, it seems. I already moved the /var/ and /opt/ folders to /home/, adding symlinks at root, but I'm still at 86% usage and I'm having performance issues (mostly when booting or running a VM). I can boot Ubuntu from a flash drive and enlarge the partition externally, but I'm really afraid of going forward with that plan. Also, despite what I said before, I'd like to avoid re-installing the system if at all possible. Any advice, suggestions or ideas on how best to approach this? Any warnings I should heed? Thanks in advance! Update: Here's the gparted screenshot. As you can see, there's Windows on dual boot (sda1-5 are all related to the Windows system), then I have a Linux swap, 14GB (so, uh... not even 15) of system, and 435GB for /home.
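
    An outline of the usual approach, strictly as a sketch: the device names below (/dev/sda7 for the system partition, /dev/sda8 for /home) are assumptions based on the layout described, and the whole operation needs a verified backup first:

        # 0. Boot a live USB so neither / nor /home is mounted, and back everything up.
        # 1. In GParted, shrink /home by moving its START to the right, freeing space
        #    directly after the system partition (moving a partition's start is the
        #    slow, riskier step).
        # 2. Grow the system partition into the freed space; GParted also grows the
        #    filesystem for you.
        # 3. Sanity-check both filesystems before rebooting:
        sudo e2fsck -f /dev/sda7
        sudo e2fsck -f /dev/sda8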

    Read the article

  • Sharing an external hard drive in Ubuntu using Samba

    - by cambraca
    /media/MYDISK is where my hard drive is mounted automatically. I created a symlink using:

        ln -s /media/MYDISK /home/camilo/MYDISK
        chmod 777 /home/camilo/MYDISK

    I'm setting up smb.conf like this:

        [myshare1]
        comment = external disk
        browsable = yes
        path = /home/camilo/MYDISK
        guest ok = yes
        read only = no
        create mask = 0775

    Also, in the [global] section I tried adding the following lines:

        follow symlinks = yes
        wide links = yes
        unix extensions = no

    The problem is that when browsing the shared folder in Windows 7, I get a "\\etc\myshare1 is not accessible" error. When I point the path at a regular folder it works fine. Also, when I point it directly to /media/MYDISK, it shows the same error. EDIT: to make it more interesting, I have no graphical interface, so I need to edit the config files directly.
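
    With guest ok = yes, the connection is mapped to a Unix guest account that also needs permission to traverse and read the mount point, and /media/MYDISK is often mounted with access only for the owning user. A sketch of the checks plus a share that points at the mount directly; the force user line is an assumed tweak, not something from the post:

        # Validate the config and watch what Samba logs when Windows connects:
        testparm -s
        sudo tail -f /var/log/samba/log.smbd

        # Can a non-owner actually reach the disk?
        namei -l /media/MYDISK

        # smb.conf sketch: share the mount point and act as your own account
        [myshare1]
           comment = external disk
           path = /media/MYDISK
           browsable = yes
           guest ok = yes
           force user = camilo
           read only = no
           create mask = 0775

    Remember to reload Samba after editing (sudo service smbd restart, or "samba" on older releases).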

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.

        Forbidden
        You don't have permission to access /index.html on this server.
        Apache/2.2.22 (Ubuntu) Server at testing Port 80

    Here are the steps that I followed:

    Step 1:

        sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 2:

        sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 3:

        sudo adduser $USER www-data

    Step 4:

        sudo a2enmod userdir

    Step 5:

        sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing

    I edited the file /etc/apache/sites-available/testing so it looks like this:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName testing
            DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Step 6: I edited hosts ("/etc/hosts") so it looks like this:

        127.0.0.1 localhost
        127.0.0.1 testing
        # The following lines are desirable for IPv6 capable hosts
        ::1 ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Step 7:

        sudo a2ensite testing
        sudo service apache2 restart

    I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages I found describe the same steps as above. I know there are similar questions out there, but the usual answer is to change the permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you! PS. I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here. Edit: There isn't much in the error log or the access log.
    On the access.log:

        ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing"
        ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0"

    And the last line repeats for about 200 rows. On the error.log:

    1. These lines repeat from time to time:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations
        [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations

    2. And this is the predominant error (hundreds of lines):

        [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
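
    That final (13)Permission denied line is the key: Apache runs as www-data and needs execute (search) permission on every directory in the path, not just on the final dist directory that Step 1 touched; adding your own user to the www-data group doesn't help, because it's Apache that has to get in, not you. A sketch of the check and the fix:

        # Shows which path component blocks www-data (look for a directory missing 'x' for group/other):
        namei -l /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/index.html

        # Grant traverse permission on each parent directory -- acceptable only if you
        # are comfortable letting other local users descend into these paths:
        sudo chmod o+x /home/neagoe \
                       /home/neagoe/Work \
                       /home/neagoe/Work/InterWebs \
                       /home/neagoe/Work/InterWebs/Projects \
                       /home/neagoe/Work/InterWebs/Projects/testing \
                       /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects \
                       /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website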

    Read the article

  • Cut in Excel doesn't work, and copying tables from one program to another returns plain text

    - by Kristina
    My Excel 2007 on Windows 7 seems to have a problem with the regular cut function. When I highlight the cells I want to cut and press cut (via the keyboard shortcut Ctrl+X, the Home menu cut command, or the right-click menu), the cells flash for a split second and then just turn back to normal. When I then paste them, they paste as if the copy function had been used. If I right-click to use "Insert Cut Cells", it is not one of the offered options at all. On my home computer I have the same combination, Excel 2007 on Windows 7, and it works just fine. Could the problem be due to the 64-bit Windows 7 version at my job versus the 32-bit version at home? Another problem: when I copy a table from Excel to Word, pasting in Word results in unformatted text instead of the table as it was in Excel. Has anyone had such problems and can offer a solution? Thanks a lot.

    Read the article

  • Setting up mutt for gmail

    - by highBandWidth
    I am trying to set up mutt for Gmail. I am following the instructions at http://crunchbanglinux.org/wiki/howto/howto_setup_mutt_with_gmail_imap; however, after putting

        set from = "[email protected]"
        set realname = "Your Real Name"
        set imap_user = "[email protected]"
        set imap_pass = "yourpassword"

    in my .muttrc (with my details, of course), I get:

        $ mutt
        Error in $HOME/.muttrc, line 12: imap_user: unknown variable
        Error in $HOME/.muttrc, line 13: imap_pass: unknown variable
        source: errors in $HOME/.muttrc
        Press any key to continue...

    If I try to send an email, it doesn't work: instead of using IMAP, it tries to send the email directly from my localhost's mail system. Mutt says it is version Mutt 1.4.2.3i.
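
    "imap_user: unknown variable" usually means this particular mutt binary was built without IMAP support (and 1.4.2.3i is also far older than what the linked guide assumes). A sketch of how to check, and the likely remedy:

        # Does this build have IMAP compiled in? Look for +USE_IMAP in the output:
        mutt -v | grep -o '[+-]USE_IMAP'

        # If it shows -USE_IMAP (or nothing), install a newer mutt built with IMAP;
        # on a Debian/Ubuntu-based system that is normally just the distro package:
        sudo apt-get install mutt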

    Read the article

  • Change directory automatically on ssh login

    - by Gareth
    Hi, I'm trying to get ssh to automatically change to a particular directory when I log in. I tried to get that behaviour working using the following directives in ~/.ssh/config:

        Host example.net
            LocalCommand "cd web"

    but whenever I log in, I see the following:

        /bin/bash: cd web: No such file or directory

    even though there is definitely a web folder in my home directory. Even using an absolute path gives the same message. To be clear, if I type cd web after logging in, I get to the right folder. What am I missing here? EDIT: Different combinations of quotes/absolute paths give different error messages:

        LocalCommand "cd web"
        /bin/bash: cd web: No such file or directory

        LocalCommand cd web
        /bin/bash: line 0: cd: web: No such file or directory

        LocalCommand cd /home/gareth/web
        /bin/bash: line 0: cd: /home/gareth/web: Input/output error

    This makes me think that the quotes shouldn't be there, and that there's another error happening.
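
    LocalCommand runs on the local machine after the connection is made, so it can never change the directory of the remote login shell. Two alternatives, as a sketch; RemoteCommand needs a reasonably recent OpenSSH (7.6 or later), which is an assumption about the client here:

        # In ~/.ssh/config: run the cd on the remote side, then hand over to a login shell
        Host example.net
            RequestTTY yes
            RemoteCommand cd web && exec "$SHELL" -l

        # Or per invocation, without touching the config:
        ssh -t example.net 'cd web && exec "$SHELL" -l'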

    Read the article

  • Securely automount encrypted drive at user login

    - by Tom Brossman
    An encrypted /home directory gets mounted automatically for me when I log in. I have a second internal hard drive that I've formatted and encrypted with Disk Utility. I want it to be automatically mounted when I login, just like my encrypted /home directory is. How do I do this? There are several very similar questions here, but the answers don't apply to my situation. It might be best to close/merge my question here and edit the second one below, but I think it may have been abandoned (and therefore never to be marked as accepted). This solution isn't a secure method, it circumvents the encryption. This one requires editing fstab, which necessitates entering an additional password at boot. It's not automatic like mounting /home. This question is very similar, but does not apply to an encrypted drive. The solution won't work for my needs. Here is one but it's for NTFS drives, mine is ext4. I can re-format and re-encrypt the second drive if a solution requires this. I've got all the data backed up elsewhere.
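
    One approach that keeps the passphrase off the disk is pam_mount, which can unlock a LUKS volume at login using the login password itself; it therefore only works if the drive's passphrase is set to match the login password. A sketch (the UUID and mount point are placeholders, and the libpam-mount package must be installed):

        <!-- /etc/security/pam_mount.conf.xml, inside the <pam_mount> element -->
        <!-- find the UUID with: sudo blkid -->
        <volume user="tom"
                fstype="crypt"
                path="/dev/disk/by-uuid/REPLACE-WITH-YOUR-UUID"
                mountpoint="/media/datadisk" />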

    Read the article

  • /etc/resolv.conf nameserver fd00::1

    - by user88631
    My /etc/resolv.conf constantly gets a mysterious entry. I run a home network with IPv6 provided by radvd, and the interface is auto-configured by NetworkManager. (All name server lookups fail when this line is first in my /etc/resolv.conf.)

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver fd00::1
        nameserver 192.168.1.1
        search home.int

    When ping is working, cat /etc/resolv.conf shows:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 192.168.1.1
        search home.int

    So something is putting fd00::1 at the start of the file, and if I ping6 fd00::1 I get "Destination unreachable: Administratively prohibited". To diagnose this I ran the router with a single cable connected to the Ubuntu machine, then ran tcpdump and restarted the network on Ubuntu. "tcpdump ip6 -e -i eth0 | grep fd00" finds nothing, so it's not being advertised on the network. The only hit I got was when an upstream router refused a connection attempt from the Ubuntu machine to fd00::1. I have also switched on debugging for NetworkManager, and it appears to set the mystery line:

        15:22:14 storage-pc NetworkManager[349]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
        15:22:14 storage-pc NetworkManager[349]: <warn> dnsmasq exited with error: Other problem (5)
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281528] [nm-dns-manager.c:598] update_dns(): updating resolv.conf
        15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281875] [nm-dns-manager.c:719] update_dns(): DNS: plugin dnsmasq ignored (caching disabled)
        15:22:14 storage-pc NetworkManager[349]: <info> ((null)): writing resolv.conf to /sbin/resolvconf
        15:22:14 storage-pc dbus[2184]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        15:22:14 storage-pc dnsmasq[2875]: reading /etc/resolv.conf
        15:22:14 storage-pc dnsmasq[2875]: using nameserver 192.168.1.1#53
        15:22:14 storage-pc dnsmasq[2875]: using nameserver fd00::1#53

    Any suggestions on how to find out where this comes from?
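
    A router-advertised DNS server normally arrives via DHCPv6 or an RDNSS option in router advertisements, which a tcpdump grep for "fd00" can miss if the capture starts after the RA went out. Two sketches: one to see what the RA actually carries, and one to make NetworkManager ignore automatically supplied IPv6 DNS servers for that connection (the connection file name is a placeholder):

        # Solicit a fresh router advertisement and print any RDNSS/DNSSL options
        # (rdisc6 is in the ndisc6 package):
        sudo rdisc6 eth0

        # Keyfile override in /etc/NetworkManager/system-connections/<your-connection>,
        # then restart NetworkManager (sudo service network-manager restart):
        [ipv6]
        method=auto
        ignore-auto-dns=true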

    Read the article

  • File doesn't exist in Linux although "locate" still finds it in the terminal

    - by Mazen Ayman
    I'm a bit new to the unix/linux environment, but I have a small problem. I'm using "locate" to find the path of a file I need; it gives me a path, but the file doesn't exist at that path, like this:

        locate test1.txt
        /home/user/test files/text1.txt
        /home/user/test1.txt~

    The "test files" directory is where I was keeping the file. I copied it to the home directory once but then deleted it, and I have no idea why locate keeps telling me there is still a temp file for it. It's worth mentioning that I used the command locate test1.txt~ | xargs -n1 rm to remove that temp file, but maybe that is what caused the problem. I tried showing hidden files and checking for temp files, but didn't find it either. Any clue what happened?
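
    locate doesn't search the disk; it reads a database that updatedb (usually a daily cron job) rebuilds, so deleted files keep showing up until the database is refreshed. A quick sketch:

        # Rebuild the locate database now, then search again:
        sudo updatedb
        locate test1.txt

        # To see what actually exists on disk right now, search the filesystem directly:
        find /home/user -name 'test1.txt*'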

    Read the article

  • Correct passphrase for ssh key rejected when ssh'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a passphrase, the passphrase is accepted. When I ssh into the same box and run git operations on the same repos, the passphrase is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I am ssh'd into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights
        and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't pinpoint the change that caused it.
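
    Two things stand out: IdentityFile points at the public key rather than the private one, and a desktop login has ssh-agent/GNOME keyring caching the passphrase while a plain ssh session does not. A sketch of the fix, using the paths from the question:

        # In ~/.ssh/config, point at the private key:
        IdentityFile ~/.ssh/id_rsa

        # Inside the ssh session, start an agent and load the key once per session:
        eval "$(ssh-agent -s)"
        ssh-add ~/.ssh/id_rsa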

    Read the article

  • Script to recursively grep data from certain files in the directory

    - by Jude
    I am making a simple shell script which will minimize the time I spend searching all directories under a parent directory and grepping for some things inside some files. Here's my script:

        #!/bin/sh
        MainDir=/var/opt/database/1227-1239/
        cd "$MainDir"
        for dir in $(ls); do
            grep -i "STAGE,te_start_seq Starting" "$dir"/his_file | tail -1 >> /home/xtee/sst-logs.out
            if [ -f "$dir"/sysconfig.out]; then
                grep -A 1 "Drive Model" "$dir"/sysconfig.out | tail -1 >> /home/xtee/sst-logs.out
            else
                grep -m 1 "Physical memory size" "$dir"/node0/setupsys.out | tail -1 >> /home/xtee/sst-logs.out
            fi
        done

    The script is supposed to grep the string STAGE,te_start_seq Starting in the file his_file and then dump it into sst-logs.out, which it does. My problem, though, is the part in the if statement. The script should check the current directory for sysconfig.out, grep the drive model and dump it to sst-logs.out if it exists; otherwise, change directory to node0, then grep the physical memory size from setupsys.out and dump it to sst-logs.out. My problem is that the if-then-else statement seems not to work: it doesn't dump any data at all, but if I execute the grep manually, I do get data. What is wrong with my shell script? Is there a more efficient way of doing this?
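
    For reference, one likely culprit is the missing space before the closing ] in the test, which makes the whole if fail; a corrected sketch with the same paths, using a glob instead of ls so odd directory names don't break the loop:

        #!/bin/sh
        MainDir=/var/opt/database/1227-1239/
        cd "$MainDir" || exit 1
        for dir in */; do
            grep -i "STAGE,te_start_seq Starting" "$dir/his_file" | tail -1 >> /home/xtee/sst-logs.out
            if [ -f "$dir/sysconfig.out" ]; then
                grep -A 1 "Drive Model" "$dir/sysconfig.out" | tail -1 >> /home/xtee/sst-logs.out
            else
                grep -m 1 "Physical memory size" "$dir/node0/setupsys.out" | tail -1 >> /home/xtee/sst-logs.out
            fi
        done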

    Read the article

  • Wildcard subdomain to file htaccess

    - by Mikkel Larson
    I have a problem with an .htaccess wildcard redirect. My base configuration is set up to work with www.domain.com and domain.com; this is governed by 2 .htaccess files:

    1: /home/DOMAIN/public_html/.htaccess

        AddDefaultCharset utf-8
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?festen.dk$
        RewriteCond %{REQUEST_URI} !^/public/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /public/$1
        RewriteCond %{HTTP_HOST} ^(www.)?festen.dk$
        RewriteRule ^(/)?$ public/index.php [L]

    2: /home/DOMAIN/public_html/public/.htaccess

        AddDefaultCharset utf-8
        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine On
        </IfModule>
        <IfModule mod_rewrite.c>
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    Now I want to redirect www.[SUBDOMAIN].domain.com/[PATH] and [SUBDOMAIN].domain.com/[PATH] to public/index.php/subdomaincontroller/realsubdomain/[PATH]. My solution so far: I added the following to file 2 (/home/DOMAIN/public_html/public/.htaccess):

        <IfModule mod_rewrite.c>
        RewriteCond %{HTTP_HOST} !www.domain.com$ [NC]
        RewriteCond %{HTTP_HOST} ^(www.)?([a-z0-9-]+)domain.com [NC]
        RewriteRule (.*) subdomaincontroller/realsubdomain/%2/$1 [L]
        </IfModule>

    Sadly this does not work. Can anyone help me please?
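
    A tightened sketch of the added block: the host pattern needs a literal (escaped) dot before domain.com, and a guard keeps already-rewritten requests from being rewritten again. The subdomain also has to reach this virtual host at all, which means wildcard DNS plus a ServerAlias *.domain.com -- assumptions about the hosting setup, not something shown above:

        <IfModule mod_rewrite.c>
            # skip the bare domain and www.domain.com
            RewriteCond %{HTTP_HOST} !^(www\.)?domain\.com$ [NC]
            # capture the subdomain, allowing an optional leading www.
            RewriteCond %{HTTP_HOST} ^(www\.)?([a-z0-9-]+)\.domain\.com$ [NC]
            # don't rewrite a request that has already been rewritten
            RewriteCond %{REQUEST_URI} !/subdomaincontroller/
            RewriteRule (.*) subdomaincontroller/realsubdomain/%2/$1 [L]
        </IfModule>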

    Read the article

  • Constantly diminishing free space on fedora 17

    - by Varun Madiath
    I don't know how to explain this other than to say that my computer seems to magically run out of free space when it has been running for a while. The output of df -h . on my home directory is below:

        /dev/mapper/vg_vmadiath--dev-lv_home 50G 47G 0 100% /home

    When I run sudo du -cks * | sort -rn | head -11 on /home I get the following output (I got this command from "decreasing free space on fedora 12"):

        32744344 total
        32744328 vmadiath
        16 lost+found

    If I restart my system things seem to fix themselves and I'm left with about 20 or 25GB of free space. I'm running XFCE with XMonad as my window manager under Fedora 17. Programs I'm running include the XFCE terminal, grep, find, firefox, eclipse, LibreOffice Writer, zsh, emacs. Any help will be greatly appreciated. I'll gladly give you any other output you might need.
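
    Space that df counts but du can't see, and that comes back after a reboot, is the classic signature of files deleted while a long-running process (a browser, an IDE, old logs) still holds them open; du -cks * also skips dot-directories such as ~/.cache. Two sketches:

        # Deleted-but-still-open files consuming space under /home:
        sudo lsof +L1 /home

        # Measure everything under the home directory, hidden directories included, largest first:
        sudo du -xk /home/vmadiath/* /home/vmadiath/.[!.]* 2>/dev/null | sort -rn | head -20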

    Read the article

  • Windows 7 Professional N needs to be just Windows 7 Professional.

    - by Jess
    I have a laptop at work that originally had Windows 7 Home Premium on it. We have a tech who comes in a few times a week to do some of our support work, and we asked him to upgrade the laptop to Windows 7 Professional. Before he left he told us the upgrade didn't work and that we'd have to order a disk. Upon checking the computer, it seemed he had upgraded it to Windows 7 Professional N. It had not previously been Home Premium N, so I'm not exactly sure how he managed to upgrade it to an N edition. I do not understand why he didn't run the Anytime Upgrade, but that is now irrelevant. How can I change Professional N to regular Professional? I would like to avoid having to restore it back to Home if possible.

    Read the article
