Search Results

Search found 15089 results on 604 pages for 'jeff home'.

  • How do I restore tab-completion on shell variables on the bash command-line?

    - by Eric
    I've long set my most-recently visited directories to shell variables d1, d2, etc. On an ancient Fedora machine I could type a command like

        $ cp $d1/<TAB>

    and the shell would replace $d1 with text like /home/acctname/projects/blog/ and would then show me the contents of .../blog, like any tab-completion. Now, both Ubuntu wheezy/sid and Fedora 16 just backslash-escape the '$', and naturally there are no completions to show. You can see this behavior in action in an OS X Terminal window: on 10.8, do something like ls $HOME/<TAB> to see what I mean. Is there a bash shell variable or option that can restore the old behavior? man bash suggests this is a bug:

        complete (TAB)
            Attempt to perform completion on the text before point. Bash
            attempts completion treating the text as a variable (if the text
            begins with $), username (if the text begins with ~), hostname
            (if the text begins with @), or command (including aliases and
            functions) in turn. If none of these produces a match, filename
            completion is attempted.

    I get the above-described completion when a token starts with '~' or a letter. It's just '$'-completion that's broken.
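
    A likely fix, assuming bash 4.2.29 or newer (bash 4.2 introduced the escaping behavior; the direxpand shopt that restores expansion arrived in patch 29 and in 4.3). A minimal sketch:

        # re-enable expansion of $vars during completion (bash >= 4.2.29 / 4.3)
        shopt -s direxpand
        # make it stick for future sessions
        echo 'shopt -s direxpand' >> ~/.bashrc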

  • SSH agent forwarding on debian squeeze

    - by nfvindaloo
    I'm trying to set up SSH forwarding like this: osx -> debianA -> debianB. I can connect to debianA fine using ssh -A, and it has the following env vars when I do:

        SSH_AGENT_PID=1543
        SSH_AUTH_SOCK=/tmp/ssh-giwdYY1542/agent.1542
        SSH_CLIENT='92.233.199.x 38954 22'
        SSH_CONNECTION='92.233.199.x 38954 108.171.179.x 22'
        SSH_TTY=/dev/pts/0

    When I try to connect to debianB, the agent is not used! The ssh -v output ends with:

        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/nic/.ssh/id_rsa
        debug1: Trying private key: /home/nic/.ssh/id_dsa
        debug1: Next authentication method: password

    Then I'm asked for a password. I have not set any ForwardAgent no directives in ssh_config and don't have a .ssh/config at all. sshd_config has not got AllowAgentForwarding in it. I have tried all of these directives as yes also. debianA and debianB both have identical ssh_config and sshd_config (verified with diff), so the really weird thing is that connecting osx -> debianB -> debianA works fine!! I'm totally out of ideas! Has anyone come across this before? Cheers! NFV
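
    One check worth running before digging further into configs, sketched under the assumption that the key lives in the OS X agent: an empty agent forwards fine but can authenticate nothing.

        # on debianA, after connecting with ssh -A:
        ssh-add -l            # should list keys; "The agent has no identities"
                              # means there is nothing to forward
        # then watch what the next hop does with the agent
        ssh -v debianB 2>&1 | grep -i agent

    If ssh-add -l comes up empty, run ssh-add on the OS X side first and retry; "Trying private key" (rather than "Offering public key") in the output above is consistent with an agent that has no identities loaded.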

  • Slow upload, fast download on Windows 7 64bit system

    - by Malik
    I've got a weird problem where the download speeds on my desktop PC (Windows 7 Home Premium 64bit) are consistently fast (approx. 400kB/s) but uploads are very slow (around 6-10kB/s). This has been going on for the last 3 weeks or so. I am a very competent user and troubleshooter, and have searched online for 2 weeks for a solution, to no avail. Part of the problem is that internet is provided over WiFi by my landlord and I have no access to the router (a BT Home Hub), although I know for sure he wouldn't have the first idea of how to restrict my usage :) (rules that out). Anyway, I've tried:

    - various drivers (my WiFi card is a TP-Link TL-WN851N, and I've tried TP-Link + Atheros + Qualcomm Atheros drivers, as suggested by Microsoft)
    - various tweaks to network parameters (e.g. as suggested by SpeedOptimser)
    - various tweaks to Windows 7 services (e.g. disabling unnecessary services or setting them to manual)
    - raising and lowering head onto a reasonably firm surface at moderate frequency (jk :D)

    None of the above have helped, and I'm officially asking for help now!! Thanks for your time and effort in advance!
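
    Two reversible TCP tweaks that come up often for exactly this asymmetric pattern on Windows 7 are sketched below; both are easy to undo, so they are safe to try even though the root cause may well be on the landlord's WiFi side.

        :: run from an elevated command prompt
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled
        :: to undo: autotuninglevel=normal, rss=enabled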

  • Step by step instructions to make dos "see" and "access" usb hard drives

    - by Gireesh Venkateswaran
    I am stuck with a USB external hard drive (Maxtor 1 Touch, 750GB) that crashed. I have photos and home movies (my son's birth, first birthday etc.) that are very important to me. I am given to understand that SpinRite is a very good tool to use, but it does not come with out-of-the-box capabilities to access USB drives. If I open the case to get the HDD out of my external HDD, I would compromise the warranty and would not be able to exchange the drive. I have done a bit of research and have the drivers that could help, but the bit I am missing is how to put it all together. I would really appreciate it if someone could give me step-by-step instructions to create a DOS boot CD that loads the drivers and assigns a drive letter, so that I can make DOS "see" the external hard drive. I have a Toshiba Satellite laptop that runs Windows XP (Home); it does not have a floppy drive. I will be grateful for your help in this regard.
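
    For reference, the driver pair most commonly used for this is the Panasonic USB ASPI driver plus a mass-storage letter-assignment driver; a sketch of the CONFIG.SYS lines on the boot CD, assuming the files are copied to its root (exact filenames depend on the driver package you have):

        DEVICE=USBASPI.SYS /V /W /E
        DEVICE=DI1000DD.SYS

    /W pauses so the drive can be plugged in, /E enables EHCI (USB 2.0) controllers, and DI1000DD.SYS then maps detected USB storage to the next free drive letter.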

  • How to setup apache to catch a proxy_pass from nginx?

    - by Paté
    I have a working Apache vhost setup such as:

        <VirtualHost localhost:10006>
            DocumentRoot "/home/pate/***/git/kohana_site/public/site/"
        </VirtualHost>

        <VirtualHost *:10006>
            ServerName api.*
            DocumentRoot "/home/pate/***/git/kohana_site/public/api/"
            LogLevel debug
        </VirtualHost>

    If I point to localhost:10006 I get my website, and on api.localhost:10006 I get my API. Then I have haproxy set up on top of that, running on port 10010, and both localhost:10010 and api.localhost:10010 have the expected behaviour. Now I have nginx set up on port 80 with this configuration:

        server {
            listen 10000;
            server_name api.*;
            location / {
                proxy_pass http://legacy_server;
            }
        }

        server {
            listen 10000 default;
            server_name _;
            location /nginx_status {
                stub_status on;
                access_log off;
            }
            # images are accessed via the CDN over HTTP (not https)
            location /n/image {
                proxy_pass http://image_caching_server;
            }
            location / {
                return 301 https://$host:10014$request_uri;
            }
        }

        upstream legacy_server {
            server localhost:10010 fail_timeout=0;
        }

    The problem is that Apache does not recognize the vhost properly and redirects api.localhost to the website instead of the API. I tried playing with proxy_set_header Host $host but it doesn't seem to do anything.
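
    For what it's worth, a sketch of the piece that usually matters here: unless told otherwise, nginx sends the upstream's name ($proxy_host, here "legacy_server") as the Host header, so Apache's name-based matching never sees api.* and falls through to the default vhost. The directive must sit in (or be inherited by) the proxying location:

        server {
            listen 10000;
            server_name api.*;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://legacy_server;
            }
        }

    Also worth checking on the Apache side: wildcards are matched through ServerAlias, not ServerName, so ServerName api.* may itself be part of the mismatch (ServerAlias api.* is the documented form).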

  • Cronjob not Running

    - by Pete Herbert Penito
    I have a bash script that looks like this:

        #!/bin/sh
        PID=`ps faux | grep libt | awk 'NR==2{print $2}'`
        STATUS=`ps faux | grep libt | awk 'NR==2{print $1}'`
        if [ "$STATUS" = "ec2-user" ]; then
            echo "libt already killed"
        else
            sudo kill $PID
            echo "libt was killed"
        fi
        sleep 5
        cd /home/ec2-user/libt
        sudo ./libt

    I have saved this file as restart.sh, and when I run it like ./restart.sh it does what it's supposed to (kills the libt process and restarts it). However, now I am trying to automate the process by using cron, so I made a cron job that I want to run every 6 hours that looks like this:

        0 */6 * * * /home/ec2-user/restart.sh

    When I run crontab -l I can see this entry, so I know it's been added properly. I should mention that the service does not have the ability to be restarted (there is no service ... restart); the process ID needs to be found and killed, and then the start script needs to be run. I have found that this cron job is not working: I'll log onto the box and can tell by looking at the logs that no restart has occurred. What am I doing wrong? What can I do to troubleshoot? Any advice would help, this is my first cron job :) Thanks!
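
    Two things to check first, sketched below: cron gives the script no terminal and a minimal environment, and Amazon Linux ships sudo with Defaults requiretty, which makes every sudo in a cron job fail. (The log path is just an example.)

        # 1) capture the script's output so the failure becomes visible
        0 */6 * * * /home/ec2-user/restart.sh >> /tmp/restart.log 2>&1

        # 2) if the log shows "sudo: sorry, you must have a tty to run sudo",
        #    exempt the user via visudo:
        Defaults:ec2-user !requiretty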

  • Windows VPN always disconnects after < 3 minutes, only from my network

    - by hemp
    First, this problem has existed for almost two years. Until serverfault was born, I had pretty much given up on solving it - but now, hope is reborn! I've set up a Windows 2003 server as a domain controller and VPN server at a remote office. I am able to connect to and work over the VPN from every Windows client I've tried, including XP, Vista, and Windows 7, without issue, from at least five different networks (corporate and home, domain and non-domain). It works fine from all of them. However, whenever I connect from clients on my home network, the connection drops (silently) after 3 minutes or less. After a short while, it will eventually tell me the connection has dropped and attempt to redial/reconnect (if I've configured the client that way). If I reconnect, the connection will re-establish and appear to work correctly, but again will silently drop, this time after a seemingly shorter period. These are not intermittent drops; it happens every single time, in exactly the same way. The only variable is how long the connection survives. It doesn't matter what type of traffic I send: I can sit idle, send continuous pings, RDP, transfer files, all of that at once - it makes no difference. The result is always the same: connected for a few minutes, then silent death. Since I doubt anyone has experienced this exact situation, what steps can I take to troubleshoot my evanescing VPN?
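
    One concrete thing to test from the misbehaving network, assuming a PPTP tunnel: consumer gateways that mishandle GRE or fragment large packets produce exactly this reproducible connect-then-die pattern, and an MTU probe will show it. (The hostname and adapter name below are placeholders.)

        :: find the largest payload that crosses unfragmented (1472 = 1500 - 28)
        ping -f -l 1472 vpn.example.com
        :: if that fails, step down until it passes, then pin the client MTU
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent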

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4GB on my /var (/dev/mapper/vg00-var) filesystem, and I wanted to extend it to 160GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html. Now I have 160:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md1              4.0G  424M  3.6G  11% /
        /dev/mapper/vg00-usr  4.3G  1.4G  3.0G  32% /usr
        /dev/mapper/vg00-var  198G  6.5G  192G   4% /var
        /dev/mapper/vg00-home 4.3G  4.4M  4.3G   1% /home
        none                  1.1G     0  1.1G   0% /tmp

    Now I have a problem: for Apache to work, each time I reboot I also need to restart it with apachectl -k restart, which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all. MySQL also has some files in /var. What have I done wrong?? :( Thank you.

    EDIT: Attaching /var/log/mysqld.log:

        120602 11:17:44 InnoDB: Waiting for the background threads to start
        120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
        120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
        120602 11:17:45 [ERROR] Aborting
        120602 11:17:45 InnoDB: Starting shutdown...
        120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
        120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
        120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
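
    The MySQL half, at least, is spelled out in that log: the set-variable= prefix was removed in MySQL 5.5, so mysqld aborts on the old option syntax in its config. A minimal sketch of the fix (file location assumed to be /etc/my.cnf):

        # old, rejected by MySQL 5.5+:
        #   set-variable=local-infile=0
        # new:
        local-infile=0

    The filesystem extension likely just coincided with a MySQL update; fixing the option and running service mysqld start should tell you more.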

  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed Jailkit on my CentOS 5.8 server and configured it according to the online guides that I found. These are the commands that were executed as root:

        mkdir /var/jail
        jk_init -j /var/jail extshellplusnet
        jk_init -j /var/jail sftp
        adduser testuser; passwd testuser
        jk_jailuser -j /var/jail testuser

    I then edited /var/jail/etc/passwd to change the login shell for testuser to /bin/bash, to give them access to a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct):

        [testuser]
        paths= /usr/bin, /usr/lib/
        executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp

    testuser is able to connect via SSH and is limited to viewing the chroot jail directory, and is also able to log in via SFTP; however, the entire file system is visible and can be traversed.

    SSH output:

        > ssh testuser@server
        Password:
        Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x
        bash-3.2$ pwd
        /home/testuser

    SFTP output:

        > sftp testuser@server
        Password:
        Connected to server.
        sftp> pwd
        Remote working directory: /var/jail/home/testuser

    What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
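
    One detail that stands out, offered as a guess rather than a confirmed fix: the ini above lists the Debian path /usr/lib/openssh/sftp-server, but on CentOS the binary lives at /usr/libexec/openssh/sftp-server (that is also the path in sshd_config's Subsystem line), and jk_lsh only constrains users whose shell inside the jail is jk_lsh. A sketch with the CentOS paths:

        # /var/jail/etc/jailkit/jk_lsh.ini
        [testuser]
        paths= /usr/bin, /usr/libexec/
        executables= /usr/bin/scp, /usr/libexec/openssh/sftp-server

    The SFTP prompt showing the real /var/jail/... path also suggests the sftp session never passes through the chroot at all, so it is worth confirming that testuser's shell in the real /etc/passwd is still jk_chrootsh.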

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with Vim. Unfortunately, we have our servers locked down enough that if you want to do anything, you need root access. Obviously (although this is frowned upon), we get tired of typing sudo before each command, which would require constantly typing in the wonderfully complex passwords mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to Vim and custom .vimrc files, we often end up stepping on someone else's custom .vimrc, and these files contain whacked-out functionality that may override behavior we have no idea about, much less the patience to learn. When working as root on a Linux box, is there any way for all of us to maintain our own .vimrc files without overwriting each other's file over and over again every time someone wants to use Vim? Ideally, we have many virtual machines all with Vim installed, so a universal solution across all servers would be best, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
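
    A minimal sketch, assuming everyone is willing to switch from sudo su - to sudo -i (which keeps the SUDO_USER variable in the environment): root's shell can then point Vim at the invoking user's own .vimrc.

        # in root's ~/.bashrc
        if [ -n "$SUDO_USER" ] && [ -f "/home/$SUDO_USER/.vimrc" ]; then
            alias vim="vim -u /home/$SUDO_USER/.vimrc"
        fi

    vim -u loads the named file instead of root's ~/.vimrc, so each person gets their own settings without touching anyone else's.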

  • Error on LDAP Login - xsessions error - Session lasted less than 10 seconds

    - by Draineh
    I have two machines both running CentOS 5.6 64bit. One machine runs DHCP, BIND and an OpenLDAP server. LDAP is correctly configured and users can authenticate against it. Using root I configured machine 2 to use LDAP for authentication, and when trying to log in it successfully authenticates against a saved user on the LDAP server but produces the following errors and then throws me back to the login screen. I can still sign in as root and use the machine as normal. The syslog doesn't show any errors, and I disabled SELinux to see if it was interfering. The error:

        Your session only lasted less than 10 seconds. If you have not logged
        out yourself, this could mean that there is some installation problem
        or that you may be out of diskspace. Try logging in with one of the
        failsafe sessions to see if you can fix this problem.

    There is then a tickbox to view the contents of ~/.xsession-errors, which contains:

        /etc/gdm/PreSession/Default: Registering your session with utmp
        /etc/gdm/PreSession/Default: running: /usr/bin/sessreg -a -u /var/run/utmp -x "/var/gdm:0:Xservers" -h "" -l ":0" "admin"
        localuser:admin being added to access control list
        No profile for user 'admin' found
        /bin/sh: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: No such file or directory
        /bin/sh: line 0: exec: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: cannot execute: No such file or directory

    Apologies if something isn't spelt quite right; the system never actually creates or saves this file, so I have had to type it across from the screen. Through the authentication panel in CentOS on the client I have set it to create users' home directories on login. The user is being correctly authenticated and the /home/admin folder has been created, but this error would suggest it has not? The client is a new install on an 80GB hard drive, so well over 80% of the drive is still available. Thanks for any suggestions or pointers.
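
    The transcript points at two missing pieces on the client rather than at LDAP itself: /usr/bin/dbus-launch and /etc/X11/xinit/Xclients don't exist, so the session script dies immediately. A quick check, with package names assumed from the usual CentOS 5 layout:

        rpm -q dbus-x11 xorg-x11-xinit       # dbus-launch and Xclients live here
        yum install dbus-x11 xorg-x11-xinit  # install whichever is missing

    If both are present for local users but the session still dies only for LDAP users, the "No profile for user" line is the next lead.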

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on brand new Vista PC). I've seen some software on Amazon like Paperport but not really sure what they're like. For home I'd like something to organise files, full text search, good scanner integration, nice interface etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/twitter searches lead back to the same tired and vague webpages pushing what look like expensive and custom made solutions. Some might be very good I suppose but it's darn hard to tell. I don't mind a hosted package but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar sounding question asked here back in August but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then so I feel it's worth asking.

  • Old scheduled task still being started, but can't find it.

    - by JvO
    System: Windows XP Home. Summary: some scheduled task is still being started by Windows, but I can't find it, nor determine where its configuration is stored. This is turning into a mystery for me... I set up a Windows XP Home machine to run a task at 7:00 AM, using the Task manager. This was a clean install, no users defined, so you got straight to the desktop after starting the machine. The filesystem uses NTFS. Later on, I needed to introduce users, so I created one (named Sam) with administrator privileges. After this I noticed that the scheduled task failed, most likely due to privilege errors (i.e. it can't write to a network drive). So I want to delete the old task and add it again with the correct user credentials. However... I can't find the old task!! I know it is still being executed at 7:00 AM, but there's no mention of this task anywhere on the system. I've looked in C:\Windows\Tasks for .job files, but there's only the "MP Scheduled Scan.job" from Security Essentials. I've searched the whole disk for mention of the batch file that is being run, but can't find it. So why is this old task still running, and more importantly, why can't I find it? Would it have something to do with introducing users on XP?
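
    Two places worth enumerating from the command line, since legacy AT-style jobs are registered and listed separately from the Scheduled Tasks folder view:

        :: full inventory of registered tasks, including hidden ones
        schtasks /query /v /fo LIST
        :: legacy at.exe jobs (these run as SYSTEM and are easy to miss)
        at

    If the 7:00 AM job shows up in either listing, schtasks /delete /tn "<name>" removes it.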

  • Family server setup

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple:

    1) It should be easy to access, using standard web browsers or by mounting the drive (shouldn't have to use any VPN setup or use PuTTY etc.)
    2) It should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or another web server; nothing top secret.

    Here is what I've looked into:

    1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest-mode authentication. User controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
    2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
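
    For the WebDAV route, a sketch of the usual shape of a password-protected share on Apache; everything here (paths, names) is illustrative, and an SSL vhost plus a DavLockDB line are assumed to be configured already:

        <Location /family>
            DAV On
            SSLRequireSSL
            AuthType Basic
            AuthName "Family share"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>

    Basic auth is reasonable here because TLS protects it in transit, and it sidesteps the Windows 7 digest quirks mentioned above: the Windows 7 WebDAV client refuses Basic over plain HTTP by default but accepts it over HTTPS.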

  • Proper end of day sequence to maintain monitor config

    - by WarmBeer
    I've got an HP EliteBook 6930p that travels between home, where it is connected to individual cables, and work, where there is a docking station. At both locations I have an external monitor as the secondary monitor, and I like to have the laptop screen as the primary, i.e. the one with the task bar. At the end of the day I close the laptop, which is supposed to set it to standby. When I get home I plug in the power cord and the external monitor cord and open the computer. When heading into work I close the computer and unplug everything. Inevitably, when I open the computer at the new location the monitors are reversed: the primary (task bar) display is on the external monitor and the laptop shows the secondary, even though when I click Identify the laptop has the 1. I then have to disable the secondary display, switch the primary to the laptop, and re-enable the secondary. I've tried locking the computer before closing, and occasionally that keeps the setup in place, but not always. Any suggestions for how to keep the config in place during transport? ed

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), with a couple of locations set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used nor configured. Users have their home folder configured as a mapped S: drive pointing to a DFS shared folder. For example, in the profile tab a user has: Home Folder - Connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (on both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (judging from the folder properties). I checked events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.
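
    One mechanism that produces exactly this symptom: when Windows cannot load a local profile, it renames the profile's ProfileList registry key with a .bak suffix and logs the user on with a TEMP profile. Worth inspecting before anything else (a read-only query against the documented key path):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s | findstr /i "ProfileImagePath .bak"

    If the affected SIDs show .bak keys, renaming them back (after fixing whatever blocked the profile load) restores the original local profiles.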

  • proftpd initial directory for each user

    - by Dels
    After successfully setting up a ProFTPD server, I want to set an initial directory for each user. I have 2 users: webadmin, who can access all folders, and upload, who can only access the upload folder.

        # Added config
        DefaultRoot ~
        RequireValidShell off
        AuthUserFile /etc/proftpd/passwd

        # VALID LOGINS
        <Limit LOGIN>
            AllowUser webadmin, upload
            DenyALL
        </Limit>

        <Directory /home/webadmin>
            <Limit ALL>
                DenyAll
            </Limit>
            <Limit DIRS READ WRITE>
                AllowUser webadmin
            </Limit>
        </Directory>

        <Directory /home/webadmin/upload>
            <Limit ALL>
                DenyAll
            </Limit>
            <Limit DIRS READ WRITE>
                AllowUser upload
            </Limit>
        </Directory>

    Everything works, but I need to tell my FTP client the initial directory for each user (otherwise it keeps failing to retrieve the directory listing), which I think should be set automatically for each user (no need to type an initial directory in the FTP client).
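
    A sketch of the usual ways to control the landing directory in ProFTPD, assuming the goal is to drop each user straight into the right place: DefaultRoot takes an optional group argument to chroot different users differently, and DefaultChdir sets the initial directory without chrooting.

        # chroot the upload user into the upload folder so it starts (and stays) there
        DefaultRoot /home/webadmin/upload upload-group
        DefaultRoot ~ webadmin-group

        # or, keep the existing DefaultRoot and only change where a session starts
        DefaultChdir /home/webadmin/upload

    The group names above are placeholders; DefaultRoot's second argument is a group expression, so each user needs to be in a matching group.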

  • Symbolic link modification for HP unix

    - by kalpesh
    Hi David Zaslavsky, recently I was working on modifying the symbolic links for particular files, and while searching on the internet I saw your post. I am trying to use this script which you had posted:

        find /home/user/public_html/qa/ -type l \
            -lname '/home/user/public_html/dev/*' -printf \
            'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n' \
            > script.sh

    I tried to modify your script for my problem in an HP-UX environment, but it seems that the -lname option does not work on HP-UX. Do you know something equivalent that I can use? Just to give you an idea of my problem, I want to change all the symbolic links inside a particular folder:

        New symbolic link -- /base/testusr/scripts
        Old symbolic link -- /base/produsr/scripts

    Folder "A" contains more than 100 different files having soft links that point to /base/produsr/scripts, but what I want is for the files inside folder A to point to /base/testusr/scripts. I am trying to achieve this on HP-UX and would really appreciate your help on this.
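
    A portable sketch that avoids -lname and readlink entirely (HP-UX find has no -lname, and readlink(1) is often absent); it parses ls -l output, the old-school but dependable way to read a link target, and uses the paths from the question:

        # rebuild every link under folder A that points at the old target
        # (/path/to/A is a placeholder for the real folder)
        find /path/to/A -type l | while read link; do
            target=`ls -l "$link" | sed 's/.* -> //'`
            case "$target" in
            /base/produsr/scripts*)
                newtarget=`echo "$target" | sed 's|/base/produsr/|/base/testusr/|'`
                rm -f "$link" && ln -s "$newtarget" "$link"
                ;;
            esac
        done

    For a dry run, replace the rm/ln line with an echo of the same commands and inspect the output first.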

  • logfile deleted on Oracle database how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. I then started up my database using startup pfile=$PFILE nomount and mounted it with alter database mount;. Now when I try alter database open; it gives me the error:

        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, just not the one I deleted. How can I go about recovering so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get:

        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again: "already part of the database". If I drop group 2 and then add it again with:

        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    Nothing. It supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
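
    A sketch of the recovery path that usually applies when a redo group is lost and that group is inactive (the standard approach, not verified against this exact assignment setup):

        STARTUP MOUNT;
        -- confirm group 2 is INACTIVE (clearing the CURRENT group needs media recovery instead)
        SELECT GROUP#, STATUS FROM V$LOG;
        -- recreate the lost group's logfile in place
        ALTER DATABASE CLEAR LOGFILE GROUP 2;
        ALTER DATABASE OPEN;

    CLEAR LOGFILE rewrites the missing member, which sidesteps the "already part of the database" error from ADD LOGFILE; use CLEAR UNARCHIVED LOGFILE if the group had not been archived yet.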

  • Gigabyte H55N-USB3: No video on HDMI

    - by newt
    I built a new PC with a Gigabyte H55N-USB3 and an Intel Core i5 650. With a monitor plugged into the DVI port, everything works fine. I installed Windows 7 32-bit and enabled Remote Desktop connections. After that, I unplugged the monitor, plugged the machine into the network and installed everything else (drivers, programs, etc.) via RDP. However, when I try to use the HDMI port on my TV, nothing appears, neither during boot nor after Windows starts. The TV says there's "no signal" (if I remove the cable the message changes to "check cable"). The cable is new and works fine with my home theater on the same TV (by the way, it is the cable that came bundled with the home theater). The video driver is the latest from Intel's site; anyway, this shouldn't be the problem, since there is no image even during boot. Any ideas or tips would be welcome. I'm googling around but have found nothing useful yet.

  • Trying to Set up SMTP Server on WIndows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now, I have AT&T U-verse at home, and a few devices connected to the gateway, including one we'll call "My-Server". When I run SmtpDiag from, say, datc@... to [email protected], I get: SOA serial number match passed; Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests *not* passed; and ultimately, "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, I get SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth. Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
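
    The 10060 timeouts to every MX host are the classic signature of a residential ISP blocking outbound port 25, which AT&T U-verse is known to do by default. A quick way to confirm from the server (hostnames below are examples):

        # if outbound 25 is blocked, this hangs and times out instead of
        # showing a "220 ..." SMTP banner
        telnet mx2.hotmail.com 25
        # a host you control listening on a non-blocked port should answer fine
        telnet smtp.example.com 587

    For testing application code specifically, pointing SmtpClient at an authenticated relay (the ISP's own SMTP host, or any mail provider's submission port 587) avoids the whole problem of delivering to MX hosts directly.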

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    Hi, how do I make Time Machine create a new complete backup without losing older versions of backed-up files? Verbose: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore an older version of a file. Problem: I had some data corruption on my external backup drive, so I repaired it with Disk Utility, which found some faults. After that, the external drive was OK and I could use Time Machine again, so I let Time Machine do one more backup. Then I verified the backup according to http://superuser.com/questions/47628/verifying-time-machine-backups, namely along

        sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log

    However, after running the command above, several differences and missing files were reported, approx. 200 files in sum. Whereas some of the missing files were caches or excluded directories, the differences do bother me, especially as some of my important documents are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or have Time Machine compare all files for differences and re-write all files that are different? Or can I set some flag on the files that do not match to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files, because I would like to keep the date of last change/date of creation. Thank you for your thoughts!
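
    A small refinement to the verification step before drawing conclusions, since caches and Spotlight indexes always differ and inflate the count (Snow Leopard's diff is GNU diff, so -x exclusion patterns should work):

        sudo diff -qr -x 'Caches' -x '.Spotlight-V100' -x '.fseventsd' \
            . "$HOME/Desktop" 2>&1 | tee "$HOME/timemachine-diff.log"

    Files that still differ after excluding the noise are the ones worth worrying about.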

  • Virtual host doesn't read .htaccess

    - by Charlie
    I just created a virtual host:

        <VirtualHost myvirtualhost:80>
            ServerAdmin webmaster@myvirtualhost
            ServerName myvirtualhost
            DocumentRoot /home/myname/sites/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/myname/sites/public_html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    It works, but it can't read the .htaccess file in public_html, which contains:

        DirectoryIndex otherindex.php

    I tried changing all the AllowOverride directives to All, but I get a 500 error. How can I fix this? Thanks.
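
    AllowOverride None is exactly the directive that makes Apache skip .htaccess, so the fix is to allow at least the override group this file needs; DirectoryIndex belongs to the Indexes group, and enabling only that (rather than All) keeps any stray directive in the file from turning into a 500. A sketch of the one block to change:

        <Directory /home/myname/sites/public_html/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride Indexes
            Order allow,deny
            Allow from all
        </Directory>

    If a 500 still appears after that, the exact offending .htaccess line will be named in /var/log/apache2/error.log.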

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this: I was trying to set up a wireless networked hard drive at home, but my wireless router doesn't take USB, so I am considering a few options. First I considered getting something like a WD My Cloud. My router is an old one provided by my service provider, and it only has 10/100 Ethernet, while the WD My Cloud has a Gigabit interface, so unless I change to a new router, data transfer will be slow. Upgrading the router is therefore a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300 I can have a decent-speed wireless shared drive at home, using my existing HDD instead of a WD My Cloud. But that router isn't cheap, so I am saving up for it. In the meantime I found out about USB-to-RJ45 adaptors. I read the reviews, and some say they work and some say they don't, but they didn't really say what they were trying to do, so I'm confused. If I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine, as it's temporary until I get my new router.
