Search Results

Search found 20566 results on 823 pages for 'folder structure'.

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. For each virtual host I point the document root at its public folder:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is that Apache resolves the domain just fine (site.com:8080), but nginx instead shows a 502 Bad Gateway (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint of why nginx won't work. EDIT: The problem was that I wasn't able to include that isolated config for nginx.
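
    A couple of quick checks help narrow down a 502 like this; a sketch, assuming the backend really is meant to be on 127.0.0.1:8080:

        # confirm Apache is actually reachable where nginx expects it
        curl -sI http://127.0.0.1:8080/ | head -n 1

        # validate the nginx config; note that "proxy_cache one" needs a matching
        # "proxy_cache_path ... keys_zone=one:10m;" somewhere in the http {} block,
        # and nginx -t will flag it if that zone is missing
        sudo nginx -t

    If the curl fails, Apache is listening somewhere other than the loopback address nginx is proxying to, which produces exactly this 502.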

  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic and restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, Textedit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except Textedit.app, which launches a demo text editor from a Qt installation). The programs are still there and still launch directly from Finder. I've already run Disk Utility and verified the disk: no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of re-installing Mac OS X? Update: I already verified that "Applications" was still checked in my Spotlight preferences. It was still returning applications located elsewhere (the Qt textedit sample app), so that shouldn't have been the issue. A few hours later it resolved itself; I guess there's a background process running on some interval.
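
    For anyone hitting the same thing, forcing Spotlight to rebuild its index is the usual next step after a panic corrupts it; a sketch, assuming the boot volume is the affected one:

        # erase and rebuild the Spotlight index for the boot volume
        sudo mdutil -E /

        # check indexing status
        mdutil -s /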

  • nginx 301 redirect to subfolder on primary domain

    - by 187j3x1
    Sorry for my poor English. I just set up WordPress on my VPS; so far it's the only thing on my site. For SEO reasons, I think it's better to redirect the whole primary domain to the blog folder. The primary domain is example.com and WordPress is at example.com/blog. What I want is to rewrite www.example.com and example.com to example.com/blog. I googled, got some scripts, made some changes, and pasted them into the nginx config file. Here it is:

        #301 redirect www to non-www
        server {
            server_name www.example.com;
            location = / {
                rewrite ^/(.*) http://example.com/$1 permanent;
            }
        }

        #301 non-www to subfolder
        server {
            server_name example.com;
            location = / {
                rewrite ^/(.*) http://example.com/blog$1 permanent;
            }
        }

    It works to some degree: it successfully redirects to example.com/blog. The only problem is that I then get a 404 Not Found error. If I instead only make nginx redirect www to example.com/blog, I can access the blog page. So I know there is something wrong in the non-www-to-subfolder script, but I don't know how to fix it :(
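
    For what it's worth, the 404 is consistent with the second server block handling the redirected request too: it sends "/" off to /blog, but contains no root or location able to serve /blog itself. A sketch of collapsing it into one block (the root path is an assumption; the PHP handling is omitted because the post doesn't show it):

        server {
            server_name example.com;
            root /var/www/example.com;   # assumed; WordPress lives in root/blog
            location = / {
                return 301 /blog/;
            }
            # ... existing WordPress/PHP locations for /blog go here ...
        }

        # www to non-www stays a separate block
        server {
            server_name www.example.com;
            return 301 http://example.com$request_uri;
        }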

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home directory, nor an ssh login. I'm having problems understanding the correct settings of users/groups I have to create to allow this to happen. I'll give an example. Having:

        MyUser@MyServer
        MyUser belongs to the group MyGroup
        MyUser's home is, let's say, /home/MyUser
        SFTPGuy1@OtherBox1
        SFTPGuy2@OtherBox2

    They give me their id_dsa.pub's and I add them to my authorized_keys. I reckon then that on my server I'd do something like:

        useradd -d /home/MyUser -s /bin/false SFTPGuy1   (and the same for the other)

    And lastly:

        useradd -G MyGroup SFTPGuy1   (then again, for the other guy)

    I'd then expect the SFTPGuys to be able to run sftp -o IdentityFile=id_dsa MyServer and be taken to MyUser's home... Well, this is not the case: sftp just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f.

    [EDIT: Messa on Stack Overflow asked me whether the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point; this was my answer: well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still no effect. It's worth mentioning that my home dir (/home/MyUser) is also readable by the group; most dirs are 750 and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
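
    One detail that commonly bites in this setup: with a shell of /bin/false, ordinary sftp/scp fails, because sshd starts the transfer program through the user's shell. OpenSSH's in-process SFTP server avoids that. A sketch of the sshd_config side (the group name is taken from the question; the rest is standard OpenSSH):

        # /etc/ssh/sshd_config
        Match Group MyGroup
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

    internal-sftp runs inside sshd itself, so it works even when the account's shell is /bin/false, while still denying interactive logins.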

  • What's the best / easiest way to combine two mailboxes on Exchange 2007?

    - by jmassey
    I've found this and this (2) (sorry, the maximum hyperlink limit for new users is 1, apparently), but they both seem targeted toward much more complex cases than what I'm trying to do, and I just want to make sure I'm not missing some better approach. Here's the scenario: 'Alice' has retired. 'Bob' has taken over Alice's position. Bob was already with the organization in a different but related position, and so they already have their own Exchange account with mail, calendars, etc. that they need to keep. I need to get all of Alice's old mail, calendar entries, etc. merged into Bob's existing stuff. Ideally, I don't want all of Alice's stuff in a separate 'recovery' folder that Bob would have to switch back and forth between to look at older items; I want it all merged into Bob's current Inbox/Calendar. I'm assuming (read: hoping) that there's a better way to do this than fiddling with permissions and exporting to and then importing from a .pst. The Office version is 2007 for everybody that uses Exchange, if that helps. Exchange is version 8.1. What (preferably step by step - I'm new to Exchange) is the best way to do this? I can't imagine this is an uncommon scenario, but my google-fu has failed me; there seems to be nothing on this subject that isn't geared toward far more complex scenarios.

    (2): http://technet.microsoft.com/en-us/library/bb201751%28EXCHG.80%29.aspx
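
    For readers with the same problem: Exchange 2007 SP1's management shell has a cmdlet aimed at exactly this kind of move, though it insists on a target folder rather than merging straight into the Inbox. A sketch (the mailbox names are the question's placeholders):

        # run from the Exchange Management Shell
        Export-Mailbox -Identity "Alice" -TargetMailbox "Bob" -TargetFolder "Alice (archive)"

    -TargetFolder is required for mailbox-to-mailbox exports, so a true in-place merge still means dragging items out of that folder afterwards (or going the .pst route).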

  • After adding skip-innodb mysql doesn't start

    - by Pentium10
    I am trying to set up these values:

        #skip-bdb
        #skip-locking
        #skip-innodb

    When I add them to /etc/mysql/my.cnf (even if I enable only one of them), mysql fails to start after the service restart, and no error message is printed:

        sudo service mysql restart
        [ ok ] Stopping MySQL database server: mysqld.
        [FAIL] Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

    Beforehand I made sure that I have no InnoDB tables, and all files of that type were removed. I tried looking for error files but I couldn't locate them: /var/log/mysql.err is a 0-byte file, and the /var/log/mysql folder has no files. rsyslog was replaced in the past with inetutils-syslogd, which might have changed the log files; that could be the reason why I don't see any error logs. I am stuck on how to look further or go forward.
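
    Two things tend to unstick this situation. On MySQL 5.5 and later, InnoDB is the default storage engine, so skip-innodb alone makes startup fail unless the default engine is changed too; and pinning the error log to a known file sidesteps the syslog shuffling. A sketch of both in my.cnf (the path is an assumption about this box):

        [mysqld]
        skip-innodb
        default-storage-engine = myisam
        log_error = /var/log/mysql/error.log

    After another restart, /var/log/mysql/error.log should state the actual reason if the server still refuses to start.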

  • NTFS 'Owner' missing when accessing hard disk from external USB adapter

    - by trismarck
    I have a hard drive with Windows XP SP3 installed on it. When the drive is connected through the standard SATA connector inside the laptop, everything works as expected. However, when I remove the drive from the laptop and connect it to the external USB adapter, almost all files/folders lose the 'Owner' field contents. I was wondering why that could be. I've tried two USB adapters and this happens with each. I could take ownership of all of the files, but this would overwrite the Owner value (the Owner value that is present when the drive is accessed through the standard SATA connector in the laptop).

    //edit: if the hard drive is used through the USB adapter, I can't access most of the files, at least until I take ownership of the files (/folders). This is how it looks: HDD inside USB adapter (screenshot) vs. HDD inside laptop (screenshot); note the Owner column.

    //edit: some of the files on the first screenshot have the Owner field filled in. That's because I took ownership of those files/folders to be able to access the files on the hard drive.

    //edit2: also, if the hard drive is connected through the USB adapter and I've taken ownership of some files as the 'ddd' user, the Owner field is _still_ empty if I log in as a different user (let's say 'eee'): ddd user (screenshot), eee user (screenshot). The 'eee' user can't access the 'ddd' folder. Both users have Administrator privileges.

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 as a push, in such a way that when I delete them from my Linux box, they remain on the Windows Server. I've found lots of sources that are similar, but most results were from Windows to Linux. I managed to find slightly more similar cases like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server.

    Initially, I used Unison because I thought the 2-way capability would come in handy, and that I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command:

        unison "profile" -ui text -auto -silent

    When I deleted the files on my Linux box, the files on the Server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the Server.

    I found out I could achieve this with rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
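
    One note for anyone copying that command: -n is the dry-run flag, so as written it only simulates the transfer. A sketch of an actual push (the host and user are hypothetical; -a already implies -r):

        # test the ssh path first; if this fails, rsync will too
        ssh user@winserver true

        # real transfer: archive + compress, nothing deleted on the destination
        rsync -avz /folder/to/be/backed/up/ user@winserver:/cygdrive/c/backups/

    Because no --delete option is given, files removed on the Linux side stay on the Windows side, which is the behavior asked for.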

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine with MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish. So I restarted it after midnight, when my local transfer speed is over 4 MB/s.

    My impression is that, independently of the transfer speed, the remote computer becomes sluggish while copying over RDP. At the same time, downloading from the internet doesn't make the remote host sluggish. As far as I understand, this is because the clipboard of the remote computer, and so its memory, becomes overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (pasting of a file)?

    Update: After reading that the slow transfer speed is caused by the encryption used for copy & pasting over RDP, and since I am more interested in overall efficiency (both the time to get the file and the ability to keep working without waiting), I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy&paste big files over RDP?". For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder of unzipped files? More exactly, I wanted to ask: what are the possible ways to improve the overall experience, in terms of (1) the speed of transfer (i.e. availability of the needed file) and (2) the responsiveness of the remote host (making the remote computer available for work before the copy & paste completes)?
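
    If RDP drive redirection is enabled on the client, the remote session sees the client's drives as \\tsclient\<letter>, and a normal file copy over that share avoids the clipboard entirely. A sketch, run on the remote machine (the paths are hypothetical):

        robocopy \\tsclient\C\Users\me\Downloads C:\incoming bigfile.zip /Z

    /Z makes the copy restartable, so a dropped session doesn't start the 1.2 GB over from scratch. Copying one compressed archive also tends to beat copying many small unzipped files, since per-file overhead dominates on slow links.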

  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / WINS / AD / DC). Every now and then the two 2008 boxes get into a mode where you can't map a drive to a random server. I say random server because it's not always the same server you can't map to. Here is a summary of what I can and can't do:

        net view \\servername    Sometimes this works, sometimes it does not.
        net view \\FQDN          This always works.
        net view \\IPAddress     This always works.
        ping servername          Sometimes this works, sometimes it does not.
        ping FQDN                This always works.
        ping IPAddress           This always works.

    I've been looking all over for a solution to this. It sure seems like Microsoft would have a hotfix by now. The kicker is that it sometimes works great, especially after a reboot. It may run for 2 weeks just fine, but all of a sudden it will fail to resolve the remote server name. It will then stay this way for a few days, then it might start working again. Also, while one of them is in this mode, the other servers have no problem getting there; it's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHosts does not make it work. All servers have static IPs and they are registered in DNS and WINS just fine.

    Here is a long thread on MS Technet describing the exact same problem, but they don't have a good solution. Here is their workaround (from June of 2010):

        Good news - a hotfix is in the works and a workaround has been identified: Root cause is that since this is SMB1, all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon ID added to the structure defining the connection. If that user logs off, all subsequent uses of that TCP session fail, as the logon ID is no longer valid. As a workaround, to keep the issue from happening, have users not log off the Terminal Server, only disconnect their sessions.

    Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James
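
    While a box is in the failing state, flushing the name caches is a quick way to tell whether stale NetBIOS/DNS entries are involved (a diagnostic sketch, not the hotfix referred to above):

        :: purge and reload the NetBIOS remote name cache
        nbtstat -R
        :: release and re-register this machine's names with WINS
        nbtstat -RR
        :: flush the DNS resolver cache
        ipconfig /flushdns

    If net view \\servername starts working right after this, the problem is name resolution; if it still fails while \\FQDN works, that points back at the SMB session issue quoted above.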

  • Upgraded users to Win7. Now getting "path not found" when saving files or opening attachments

    - by Matt Penner
    We have a Server 2008 AD environment with about 5k users. We just rolled out Windows 7 SP1 (we were on XP) with great success. However, about once a day we get a few calls where a user opens a file from their Documents folder (which is on the server and redirected), edits it, and attempts to save, but Win7 reports that the path is not found, either because it doesn't exist or because of missing permissions. The only way to fix it is to delete the profile. In addition, we get about the same number of calls, from different users, saying that they cannot open attachments from Outlook 2010 due to no permission. We have to edit the temporary Outlook storage path in the registry to fix it (or delete the profile). I think the two issues may be related. What scares us is that we rolled out 1 month ago and had no calls of this nature until about 2 weeks ago. It started off as one or two but seems to be growing. Any ideas? We're going to open a Microsoft ticket, but I wanted to see if anyone else has run into this. Thanks!
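
    The Outlook half of this usually involves the secure temp folder, which is the registry value mentioned above; when it fills up or points at a misbehaving redirected path, attachments fail with permission errors. A sketch of where that lives for Outlook 2010 (14.0); the local repoint path is hypothetical:

        reg query "HKCU\Software\Microsoft\Office\14.0\Outlook\Security" /v OutlookSecureTempFolder

        :: repoint it at a guaranteed-local folder
        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook\Security" /v OutlookSecureTempFolder /t REG_SZ /d "C:\Temp\OLK" /f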

  • Moving web files to /home/user/ gives permission denied using apache

    - by Maaz
    I recently created some Linux users on my machine, and their respective directories were created in the form /home/my_user, so I decided to treat each user as one of my websites. I moved all my website files over to directories like /home/my_user/public_html/. I edited the virtual host in my httpd.conf and changed the root directory folder, so this is how it looks:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/my_user/public_html"
            ServerName mywebsite.com
            ServerAlias www.mywebsite.com
            ErrorLog "/var/log/httpd/mywebsite/error_log"
            CustomLog "/var/log/httpd/mywebsite/access_log" common
        </VirtualHost>

    This virtual host configuration was working perfectly fine with my older document root path, /var/www/html/mywebsite/public_html, but after changing it to what it is right now, I am getting a permission denied error. I followed the instructions here: http://stackoverflow.com/questions/14427808/you-dont-have-permission-error-in-apache-in-centos

    Even after following those instructions, when I run the following command:

        sudo -u apache ls /home/my_user/public_html

    the server responds with:

        ls: cannot open directory /home/my_user/public_html: Permission denied

    Oddly, I no longer get a permission denied error when I try to access my site; instead, I am now redirected to the default Apache page rather than my website. I am not exactly sure what's wrong any more. If anyone has an idea, it would be great if you could help out!
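
    A sketch of the two usual culprits on a CentOS-style box (assuming the apache user only needs to traverse the home directory, not list it):

        # give "other" execute (traverse) permission up the path
        chmod 711 /home/my_user
        chmod 755 /home/my_user/public_html

        # if SELinux is enforcing, the files also need a web-readable context
        sudo chcon -R -t httpd_sys_content_t /home/my_user/public_html
        sudo setsebool -P httpd_enable_homedirs 1

    The sudo -u apache ls test failing under traverse-only (711) permissions is expected, by the way: x without r allows entering a directory but not listing it, so a better probe is sudo -u apache cat /home/my_user/public_html/index.html.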

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, add them to versioning, then commit the changes. I then clone the repository to my local hard drive, make a change in the local repository, commit it, and push back to the shared filesystem repository. The shared repo graph I get in TortoiseHg looks strange (to me). This is the shared repo: (screenshot) This is the local repo: (screenshot)

    On the shared repo, the working directory always shows up at the top, then the graph goes down to rev 0 and back up again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that top revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository I don't get that dangling working directory at the top of the list; everything is in one branch. I also noticed that in my local repository I can back out the tip revision with no problem, while in the shared filesystem repository I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?
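
    A detail that explains most of this picture: hg push moves changesets but never updates the destination's working directory, so the shared repo's checked-out files stay parked on the old revision ("not a head revision") until someone updates it there. A sketch (the path is hypothetical):

        :: run on a machine that can see the shared folder
        hg -R X:\path\to\shared\repo update

    After that, the working-directory marker in TortoiseHg should sit on the head again, and backouts should behave the same as in the local clone.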

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group?

    The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots.

    One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public ones. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • Is there any proper documentation for mod-evasive?

    - by Question Overflow
    mod_evasive20 is one of the loaded modules on my httpd server. I've read good things about how it can stop a DoS attack and wanted to try it out on my localhost. A search for mod_evasive turns up a blog post by the author which briefly describes what it does; other than that, I can't seem to find a reference or documentation on the Apache modules site. I was wondering whether it is a module recognised by Apache at all, since there is no mention of it on the website. I have a mod_evasive.conf file sitting in the /etc/httpd/conf.d folder that contains the following lines:

        LoadModule evasive20_module modules/mod_evasive20.so
        <IfModule mod_evasive20.c>
            DOSHashTableSize 3097
            DOSPageCount 2
            DOSSiteCount 50
            DOSPageInterval 1
            DOSSiteInterval 1
            DOSBlockingPeriod 10
        </IfModule>

    My understanding of these settings is that if I were to click refresh or send a form more than two times in a one-second interval, Apache would issue a 403 error and bar me from the site for 10 seconds. But that is not happening on my localhost, and I would like to know the reason. Thanks.
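
    mod_evasive is a third-party module, so it never shows up in Apache's own module documentation; the README in its source tarball is effectively the reference, and the tarball also ships a test.pl script that hammers the server to trigger a block. A rough curl equivalent of that test (the URL assumes the local default page):

        for i in $(seq 1 50); do
            curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
        done

    If the module is active, the 200s should turn into 403s once DOSPageCount is exceeded; if they never do, the module (or its conf file) isn't actually being loaded.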

  • 3 simple questions about file permissions

    - by Camran
    1- I wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2- Could you give me a brief explanation of the columns above? The first one is the permissions they have. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example), and the others are obvious.

    3- Could you give me a brief explanation of the folders above? Especially the "lock" and "tmp" folders. lock contains an apache2 folder which seems empty. Thanks
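
    For readers skimming the listing: in a line such as

        drwxr-xr-x  2 root root 4096 2010-05-30 03:34 backups

    the fields are, in order: the file type and permission bits (d = directory, then rwx triplets for owner, group, and other, with s marking setgid and t the sticky bit), the hard-link count, the owner, the group, the size in bytes, the last-modification time, and the name. This is standard ls -l output rather than anything /var-specific.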

  • Windows 7 can't find Ubuntu computer by hostname

    - by endolith
    I got a new Windows 7 machine and was using VNC, SSH, etc. to connect to my Ubuntu machine, which previously worked fine using the Ubuntu computer's hostname. Now it doesn't work if I use the machine's hostname, but it does if I use the local IP or the DynDNS name. I can also still access it from my Android phone using the local hostname over SSH. If I try to connect with SSH to the hostname, it says "Host does not exist". VNC says "Failed to get server address". NX says "no address associated with name", and I don't see it in Windows' "Network" folder. I've rebooted everything. I've turned off the Windows firewall. It was working fine a few days ago, but now it's not. How do I figure out what's blocking it?

    Aha: it probably has something to do with Samba. I reset the Samba configuration the other day, and apparently this can affect it: http://ubuntu-virginia.ubuntuforums.org/showthread.php?t=1558925 I tried commenting out "encrypt passwords = No" as described there, but it still doesn't work.
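
    This fits how Windows resolves flat (non-FQDN) names: it falls back to NetBIOS, which on the Ubuntu side is answered by Samba's nmbd. If the reset config no longer registers a NetBIOS name, or nmbd isn't running, Windows loses the bare hostname while the IP and DynDNS names keep working. A sketch of what to check (the netbios name value is whatever the machine should be called):

        # /etc/samba/smb.conf, [global] section
        netbios name = UBUNTUBOX

        # then restart the NetBIOS name daemon
        sudo service nmbd restart

    A quick test from the Windows side is nbtstat -A <ubuntu-ip>, which lists the NetBIOS names the machine is actually announcing.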

  • Windows 7: L10N mechanics

    - by John Sonderson
    I have a localized version of Windows 7. I can't figure out where Windows gets the names for files and directories on the system. For instance, consider the following (default) files:

        > cd C:\Users\Public\Pictures\Sample Pictures
        > dir
        Chrysanthemum.jpg
        Desert.jpg
        ...

    When I view these files in the default file explorer, I see these names:

        Crisantemo.jpg
        Deserto.jpg
        ...

    This seems to imply that each file can somehow be assigned a localized name somewhere. However, I cannot figure out how. I would appreciate it if someone could shed some light on this issue. Thanks.

    UPDATE EDIT: The desktop.ini file in the folder containing Chrysanthemum.jpg contains the following entries. The .dll files used to translate the various resources are unfortunately not human-readable, and I have no clue how they could be generated for other, user-created files, but they serve the purpose and solve the mystery which led to this post. Thanks.

        [LocalizedFileNames]
        Chrysanthemum.jpg=@%systemroot%\system32\SampleRes.dll,-101
        Desert.jpg=@%systemroot%\system32\SampleRes.dll,-102
        Hydrangeas.jpg=@%systemroot%\system32\SampleRes.dll,-103
        Jellyfish.jpg=@%systemroot%\system32\SampleRes.dll,-104
        Koala.jpg=@%systemroot%\system32\SampleRes.dll,-105
        Tulips.jpg=@%systemroot%\system32\SampleRes.dll,-106
        Lighthouse.jpg=@%systemroot%\system32\SampleRes.dll,-107
        Penguins.jpg=@%systemroot%\system32\SampleRes.dll,-108

        [.ShellClassInfo]
        LocalizedResourceName=@%SystemRoot%\system32\shell32.dll,-21805
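
    The same mechanism can be applied to one's own files, as long as the folder is flagged so Explorer reads its desktop.ini. A sketch (the folder path and MyStrings.dll with string ID 201 are hypothetical; the strings still have to live in a resource DLL):

        [LocalizedFileNames]
        MyPhoto.jpg=@C:\Photos\MyStrings.dll,-201

    saved as C:\Photos\desktop.ini, plus the attribute flags Explorer requires before it honors the file:

        attrib +s "C:\Photos"
        attrib +h "C:\Photos\desktop.ini"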

  • MacBook Pro with OSX 10.6.3 (Snow Leopard) Wi-Fi network connection breaks after few minutes

    - by Yanick Landry
    I have a MacBook Pro with OS X 10.6.3 (Snow Leopard). After connecting to a Wi-Fi network, the connection "breaks" after a few minutes. What I mean by "breaking" is that no requests get through (they all time out), whether loading a web page, connecting to a shared folder, connecting to my local router at 192.168.0.1, or pinging anything. When in a "broken" state, I can see in the Network Settings panel that I still have an active IP, which I can successfully ping. I have this problem at home with a D-Link DI-624 router and at work with a D-Link WBR-2310, both with updated firmware. I thought DHCP was the issue, so I tried assigning a fixed IP address (192.168.0.166). It successfully connects, but after a few minutes the connection still breaks. The workaround I'm currently using is to disable the AirPort (from the network icon menu in the top bar), wait a few seconds, then re-enable it. It then quickly works again, but the connection still breaks after a few minutes. I tried Googling my problem, but I think I can't find any good keywords! It's my first question here, so sorry if I don't respect some rules.

  • Issue with exim4u

    - by bretterer
    I am using exim4u as a mail server on Debian. Everything was working fine until recently, and I have not done anything to the server between the time it was working and now. I have a domain set up that is receiving and sending mail correctly. But when I add a forwarding address pointing to a gmail address, I can still receive and send email from my webmail client, yet the forwarded mail never makes it to gmail. I have checked the logs, and this is what I found:

        2012-04-01 18:47:04 1SEPns-0000aN-Br DKIM: d=gmail.com s=20120113 c=relaxed/relaxed a=rsa-sha256 [verification succeeded]
        2012-04-01 18:47:10 1SEPns-0000aN-Br H=mail-bk0-f43.google.com [209.85.214.43] Warning: X-Spam_score: -0.3
        2012-04-01 18:47:10 1SEPns-0000aN-Br <= [email protected] H=mail-bk0-f43.google.com [209.85.214.43] P=esmtps X=TLS1.0:RSA_ARCFOUR_MD5:16 S=3424 id=CAGZkSKbYc7SJR+yXTgG8ubQvx4PNb0CwHG1DDKGeZ-qFiA$
        2012-04-01 18:47:11 1SEPns-0000aN-Br => /home/mail/mydomain.com/support/Maildir ([email protected]) <[email protected]> R=virtual_domains T=virtual_delivery
        2012-04-01 18:47:12 1SEPns-0000aN-Br => [email protected] <[email protected]> R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.225.27] X=TLS1.0:RSA_ARCFOUR_SHA1:16
        2012-04-01 18:47:12 1SEPns-0000aN-Br Completed

    I am not a mail server person, so I'm not sure what everything here is saying. It appears to me that it is successfully sending mail to gmail, though. I have checked my spam folder as well, and there's nothing there either. If it would help to have more information from my server, let me know, because I'm not sure what would be of help here.
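
    Reading that log: the => line with R=dnslookup T=remote_smtp and H=gmail-smtp-in.l.google.com means exim handed the message to Google's MX and it was accepted, so any loss after that point happened on Google's side (the "All Mail" and spam views, or a filter). For the next test message, exim's address testing and per-message log are the quick tools; a sketch (the address is a stand-in):

        # how would exim route this recipient?
        sudo exim4 -bt [email protected]

        # pull every log line for one message by its ID
        sudo grep 1SEPns-0000aN-Br /var/log/exim4/mainlog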

  • Issues Deploying Functional WAR to Elastic Beanstalk with Tomcat7

    - by BFar
    I am currently deploying OpenTripPlanner (http://github.com/OpenPlans/OpenTripPlanner.git) to Elastic Beanstalk. I'm able to successfully build and deploy OpenTripPlanner with my own customized settings on an EC2 instance: the appropriate WAR file is placed in the Tomcat webapps folder, and when Tomcat is started up it auto-deploys and even downloads OpenTripPlanner's graph.obj from an S3 bucket. All of that works just fine, except when I try to deploy to Elastic Beanstalk. When I upload to Elastic Beanstalk, the log shows that my WAR file is successfully unpacked and the graph.obj is successfully downloaded from my S3 bucket. But then nothing happens, and I can't load the site in my browser. The health is RED, and I can't figure out what is going on. I've looked into ports and DNS issues, but I can't determine what's wrong. Anyone have any ideas? Why would a WAR that works on Tomcat 7 outside of Beanstalk fail to be accessible?
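
    Two Beanstalk-specific checks are worth doing here. The environment shows RED when the load balancer's health check (the "Application Healthcheck URL" environment setting) fails; if that points at / while the WAR deploys under a context path (e.g. /opentripplanner-webapp, a guess based on the project name), the app can be working and still look dead. The container logs also say more than the deploy log; a sketch from an SSH session on the instance (paths are the usual Tomcat 7 defaults):

        tail -n 100 /var/log/tomcat7/catalina.out
        curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/

    A 404 from that curl, combined with the app answering under its context path, suggests renaming the WAR to ROOT.war or pointing the health-check URL at the context path.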

  • Directories shown as files, when sharing a mounted cifs drive

    - by Johan Sigfred Abildskov
    I have an issue where a directory is shown as a file when accessing a Samba share (on Ubuntu 12.10) from a Windows machine. The output of ls -ll in the folder on the Linux box is as follows:

        chubby@chubby:/media/blackhole/_Arkiv$ ls -ll
        total 0
        drwxrwxrwx 0 jv users 0 Jun 18  2012 _20
        drwxrwxrwx 0 jv users 0 Apr 17  2012 _2006
        drwxrwxrwx 0 jv users 0 Apr 17  2012 _2007
        drwxrwxrwx 0 jv users 0 May 12  2011 _2008
        drwxrwxrwx 0 jv users 0 Feb 19 09:53 _2009
        drwxrwxrwx 0 jv users 0 Dec 20  2011 _2010
        drwxrwxrwx 0 jv users 0 May  8  2012 _2011
        drwxrwxrwx 0 jv users 0 Mar  5 11:37 _2012
        drwxrwxrwx 0 jv users 0 Feb 28 10:09 _2013
        drwxrwxrwx 0 jv users 0 Feb 28 11:18 _Mailarkiv
        drwxrwxrwx 0 jv users 0 Jan  3  2011 _Praktikanter

    The entry in /etc/fstab is:

        # Mounting blackhole
        //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

    When I access the share directly from the NAS on my Windows box, there are no issues. The Samba version is 3.6.6, but I couldn't find anything in the changelogs that seems relevant. I've tried mounting it in different locations with different permissions, users, and groups, but I have not made any progress. Due to my low reputation on Server Fault (I'm mostly a Stack Overflow user), I'm unable to post a screenshot showing that the directories appear as files. If I type the full path in Explorer, the directory listing works excellently, except that any subdirectories are then shown as files. Any attack vector for this issue would be greatly appreciated. Please let me know if I have provided insufficient details.

    Edit: The same share, when accessed from OS X, works perfectly, listing the directories as directories. Best regards!
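
    Worth noting: the listing itself is unusual, since those directories report a hard-link count of 0 (a directory normally has at least 2), which suggests the metadata coming through the CIFS mount is already odd before Samba re-exports it. A commonly suggested experiment, not a confirmed fix, is to disable the CIFS Unix extensions and server inode numbers on the underlying mount, since both change how directory metadata is presented:

        //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777,nounix,noserverino 0 0

    (The added options are nounix and noserverino; everything else is the original line.)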

  • Sharing between Vista and Windows 7

    - by Metro Smurf
    Vista Ultimate 32-bit; Windows 7 Ultimate 64-bit. I've read through similar questions about sharing between Win7 and Vista, but none of them have resolved my issue of not being able to share between the two: "Connecting to a Vista shared folder from Windows 7", "Networking Windows 7 and Vista", "Enable File sharing in Windows Vista".

    Previously I had my Vista and XP systems sharing back and forth without any problems. I was able to access the shares without entering a username/password in the NT challenge prompt (note: account names and passwords were different on the Vista and XP systems). I have now replaced my XP system with a Win7 system, and when I attempt to access shares to/from Vista or Win7, I am continually prompted with an NT challenge to enter my credentials.

    Things I've verified/tried:

        Both systems are on the same workgroup.
        Win7 is using the Home network profile; Vista is using the Private network profile. In other words, neither system is using a Public network profile.
        Enabled file sharing with and without password protection on both Vista and Win7.
        Tried HomeGroup connections (Win7) with both "Allow Windows to manage connections" and "Use user accounts to connect".
        Reviewed too many online articles to count while troubleshooting.
        Set the shares to allow full control by Everyone.
        Set up the shares directly on the directory and through the share manager.

    My question: how can I enable file sharing between Vista and Win7 without ever being prompted with a username/password challenge?
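
    Since the account names and passwords differ between the boxes, one way to stop the prompt is to store the credential once in Windows' credential vault, so the challenge is answered silently. A sketch from the Win7 side (all names are placeholders):

        :: cache credentials for the Vista machine
        cmdkey /add:VISTABOX /user:VISTABOX\someuser /pass:theirpassword

        :: after that, the share should map without a prompt
        net use Z: \\VISTABOX\SharedDocs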

  • How to automate downloading files?

    - by Damon
    I got a book which included a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm through archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
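
    wget can, provided it is handed the browser's logged-in session. A sketch (the host is a stand-in; cookies.txt is exported from the browser with a cookie-export extension):

        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt \
                 --recursive --level=1 --accept "*.jpg.zip" \
                 "http://example-archive-site/archive_bookname/index.$i.htm"
        done

    --load-cookies replays the session, --level=1 follows only the links on each index page, and --accept keeps just the zip files. If the site uses session cookies that never hit disk, adding --keep-session-cookies together with a scripted login via --post-data is the usual fallback.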

  • Installing sphinx on a web hosting server

    - by fiftyeight
    I want to install Sphinx search on a web-hosting server. I'm on a Linux VPS with HostGator, but I've never actually installed anything on a remote server, so this will be a first for me. If there's anyone here who has installed Sphinx, you'd really help me out. I had some problems when using Sphinx on my PC, with the permissions and the MySQL files, but eventually I got it working there. Anyway, I'd be really grateful if you could help me with some questions:

    1. Do I need root access to install Sphinx? I have root access to the server, but I'd connect to it as a normal user, since doing things as root is always less secure.

    2. Can anyone tell me which user should execute the indexer and the search daemon? Should I use root access for this? When I did it as a normal user on my PC, it gave me some trouble with the PID file and the log files.

    3. The last time I executed the search daemon, I ran it as a normal user and it gave me some trouble. I created the folder /var/log/ for the log files and did chmod 777 on it, but still, when I executed the search daemon, it created the PID file "searchd.pid" with no permissions for some reason. Any idea why?

    Thanks in advance. fiftyeight
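
    Sphinx runs fine as an unprivileged user, as long as every path in its config is writable by that user; root is only the path of least resistance, not a requirement. A sketch of the searchd section arranged that way (the paths and user are hypothetical):

        searchd
        {
            listen      = 9312
            # keep runtime files where the non-root user can write
            pid_file    = /home/youruser/sphinx/searchd.pid
            log         = /home/youruser/sphinx/searchd.log
            query_log   = /home/youruser/sphinx/query.log
        }

    With that in place, both indexer --config and searchd --config can be run as the same normal user, and the PID/log permission trouble goes away because nothing tries to write under /var.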
