Search Results

Search found 120660 results on 4827 pages for 'http server'.


  • Error when trying to access Shared files from iMac [closed]

    - by SatheeshJM
    I used to access all my Windows XP shared files on my Mac using Finder -- Window -- Connect to Server. Now, all of a sudden, an error crops up when I try to connect: "There was a problem connecting to the server '192.168.1.*'. The server may not exist or it is unavailable at this time. Check the server name or IP address, check your internet connection and then try again." How can I get rid of this error and access my shared files from my Mac? P.S. My network connection is fine.
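
    A quick way to separate a connectivity problem from an SMB problem, run from Terminal on the Mac (192.168.1.x and xpuser are placeholders for the XP machine's actual address, elided above, and its account name):

        # can the Mac reach the XP box at all?
        ping -c 3 192.168.1.x

        # can it enumerate the XP shares over SMB? (prompts for the password)
        smbutil view //xpuser@192.168.1.x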

    Read the article

  • Central Storage for windows user accounts homedirs .. hardware/software needed?

    - by mtkoan
    We have 120+ users in our network, and are endeavoring to centralize logon authentication and home-directory storage server-side. Most of the users are on Windows 2000/XP machines, and a few run Mac OS X. Ideally the solution will be open source -- can this all be managed from a Linux server running LDAP and Samba? Or would a hacked NAS box running FreeNAS or similar suffice? Or is Microsoft's Active Directory really the preference here? Is it viable to store PST files on this server for users to read from and write to? They are very large, ~1.5 GB. We have no mail server (or money) capable of Exchange or IMAP, only an old POP3 server. What kind of hardware horsepower and network architecture should we have for this kind of thing?
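
    For what it's worth, the LDAP + Samba route is workable from a single Linux server. A minimal sketch of the Samba side, assuming an OpenLDAP passdb backend (workgroup, hostname and DNs are placeholders):

        # /etc/samba/smb.conf (sketch)
        [global]
            workgroup = OFFICE
            security = user
            passdb backend = ldapsam:ldap://localhost
            ldap suffix = dc=example,dc=com
            ldap admin dn = cn=admin,dc=example,dc=com

        [homes]
            comment = Home directories
            browseable = no
            read only = no

    Mac OS X clients can mount the same [homes] share over SMB, so one server covers both platforms.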

    Read the article

  • Postfix does not get external emails

    - by Marks
    I have a small vserver (Ubuntu) on which I want to run Mailman, and therefore have to configure Postfix. I have already configured Postfix to send mail via a relay over Gmail (server to ext.). I also receive local mail (server to server), but I don't get mail from external senders (ext. to server). I have a domain routed to the vserver, but when I send an email from outside (e.g. from Gmail) to [email protected], it never arrives in the inbox -- and no error bounce comes back to my Gmail account either. What could the problem be? Do I have to configure mail receiving from external senders somewhere? Thanks for any help.
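
    A first thing to check is whether Postfix is listening on the public interface and accepts mail for the domain at all; a sketch, assuming a stock Ubuntu Postfix (yourdomain.example is a placeholder):

        # should show "inet_interfaces = all" (not loopback-only) and the
        # domain listed in mydestination
        postconf inet_interfaces mydestination

        # from an outside host: is port 25 reachable through the vserver's firewall?
        telnet yourdomain.example 25

        # does the MX record actually point at the vserver?
        dig MX yourdomain.example +short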

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage"
        failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com,
        request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;
            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;
            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;
            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
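
    One hedged guess at the cause: in older Passenger releases the passenger_enabled directive is not inherited into location blocks, so a location /mypage of its own makes Nginx treat /mypage as a plain static file under root -- which matches the open() error above. A sketch of the fix under that assumption, repeating the directive inside the location:

        location /mypage {
            passenger_enabled on;        # re-enable Passenger for this location
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
        }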

    Read the article

  • JBoss basic access

    - by user101024
    I have JBoss 5 deployed on Solaris 10; the server's connection has unrestricted high ports (>1023) open to the internet. I can access the box via SSH and FTP from a second server on the same subnet and from anywhere over the internet. JBoss is running on port 8080 and is accessible via http://localhost:8080 on the box itself. I cannot access it via http://ip.add.goes.here:8080, either from the other server on the same subnet or over the internet. Is there any service or configuration within JBoss, or elsewhere on Solaris 10, that needs to be changed from the default to allow HTTP traffic to be served? Thanks, Kevin
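
    JBoss 4.2 and 5 bind all their services to 127.0.0.1 by default, which produces exactly this symptom: reachable on localhost, dead from everywhere else, with no firewall involved. A sketch of the usual fix (install path is a placeholder):

        # start JBoss bound to all interfaces instead of loopback only
        cd /opt/jboss-5/bin
        ./run.sh -b 0.0.0.0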

    Read the article

  • How to specify Multiple Secure Webpages with .htaccess RewriteCond

    - by Patrick Ndille
    I have 3 pages that I want to make secure on my website using .htaccess: login.php, checkout.php and account.php. I know how to make just one page at a time secure:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.php
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]

    I am trying to figure out how to include the other 2 specific pages so they are also secure, and used the expression below, but it didn't work:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.php
        RewriteCond %{REQUEST_URI} /checkout.php
        RewriteCond %{REQUEST_URI} /account.php
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]

    Can someone help me with the right expression that will work with multiple pages? The second part is that, if HTTPS is already on and a user moves to a page that is not one of the pages I specified above, I want it to drop back to HTTP. How should I write the statement so it redirects back to HTTP if it's not one of the pages above? I have it like this, but it's not working:

        RewriteCond %{HTTPS} on
        RewriteRule !(checkout|login|account|payment)\.php http://%{HTTP_HOST}%{REQUEST_URI} [L,R]

    Any thoughts?
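
    For the record, consecutive RewriteCond lines are ANDed by default, so the three REQUEST_URI conditions above can never all match one request; they need [OR] flags or a single alternation. A sketch of both halves using an alternation, mirroring the four scripts named in the last rule:

        RewriteEngine On

        # force HTTPS for the protected scripts
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(login|checkout|account|payment)\.php [NC]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        # drop back to HTTP for everything else
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(login|checkout|account|payment)\.php [NC]
        RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]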

    Read the article

  • Cannot ping a VM from a Hyper-V host

    - by user1688175
    I am facing a weird situation in my network environment. My infrastructure looks like this:

    - A D-LINK DIR-635 acting as my default gateway (192.168.0.1)
    - A physical Windows Server 2012 machine (192.168.0.10) with the following roles: DHCP, DNS, AD DS and Hyper-V
    - A virtual Windows Server 2012 machine (192.168.0.50) which I intend to use as an IIS server (the role is not deployed yet)

    My virtual machine was able to get an IP address from the DHCP server and is working perfectly: I can ping the default gateway (by IP, FQDN or DNS alias), the Hyper-V host and any site on the internet (CNN.com, for example). However, I cannot ping the VM from my host; it says Request Timed Out. Do you guys know what I might be doing wrong? Any support is appreciated. Thanks!
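
    Windows Server 2012 blocks inbound ICMPv4 echo requests by default, so a VM that pings out happily will still time out when pinged. If that is the cause here, allowing echo on the VM is enough; a sketch using netsh inside the VM:

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow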

    Read the article

  • Inbox lock for exclusive access [duplicate]

    - by user212051
    This question already has an answer here: Dovecot pop3: Disconnected for inactivity (2 answers)

    - I found a server logged into a mailbox on my SMTP server.
    - That server's connection was released for inactivity after 10 minutes.
    - In the 10 minutes between login and the disconnection for inactivity, 3 attempts to send a message to this mailbox from 3 different clients failed with "unable to lock for exclusive access: Resource temporarily unavailable".
    - After the disconnection, the 3 messages reached the mailbox fine.

    I tried to simulate the process and lock a test mailbox, but I couldn't. I was aiming to understand: who can lock? Who has exclusive access? Why can only the client server lock? And how do I solve this?
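
    If the mailboxes are mbox files served by Dovecot (as the linked duplicate suggests), the lock behaviour is governed by these dovecot.conf settings; a sketch under that assumption, with illustrative values:

        # which lock methods readers and writers take on an mbox
        mbox_read_locks = fcntl
        mbox_write_locks = dotlock fcntl
        # how long a delivery waits for the lock before failing
        mbox_lock_timeout = 5 mins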

    Read the article

  • svn:externals cache and stale URLs

    - by dcaunt
    I have a subversion externals entry in a library folder which looks like this:

        Z https://svn/Z/trunk/library/Z

    Fetching the external fails:

        Fetching external item into '/home/releases/50/library/Z'
        svn: OPTIONS of 'http://svn/repo/trunk/library/Z': could not connect to server (http://svn)

    The externals URL used to be the same, but over the HTTP protocol. Having changed the externals to point to HTTPS, I can't figure out why subversion is still trying to use the old URL. Does subversion cache the externals path, and if so, how can I clear this? If not, what else could be causing this? I can check out from the correct (HTTPS) URL fine from the server. NOTE: svn is an entry in the server's local hosts file, pointing to our subversion server's IP.
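
    The externals definition lives as a versioned property on the parent directory, and an already checked-out working copy keeps whatever URL it was fetched with. A sketch of how to inspect and correct it, reusing the paths from the question:

        # show what the working copy actually recorded
        svn propget svn:externals /home/releases/50/library

        # fix the property if it still carries the http:// URL
        svn propedit svn:externals /home/releases/50/library

    A fresh checkout of the release directory also clears any stale external left on disk.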

    Read the article

  • Remote Desktop Multi Monitor Connection

    - by user196039
    This question may be a bit unusual, but I think it's an interesting angle: why does a remote desktop connection to my server, with all four of my 30-inch (2560x1600) monitors, work even though the server doesn't have a graphics card installed? I guess the graphics are perhaps not really rendered on the server? What exactly happens there so that this works? Searching for this I mostly found "how to enable multi monitor support" but no answer to my question yet. Any links/documentation I can read to understand this better? (To get all four monitors working well on my local machine, the one graphics card I had wasn't even enough, so I now have two graphics cards in the local machine.) The operating systems are Windows 7 Home Premium locally and Windows Server 2008 R2 on the server.

    Read the article

  • htaccess with wildcard SSL

    - by Ericko
    We have a wildcard SSL certificate that is supposed to work on any subdomain of a given domain. On this server we have this file structure:

        /home/DOMAIN/public_html/subdomainx
        /home/DOMAIN/public_html/subdomainy
        ...

    The certificate is installed, but when you visit any subdomain over HTTPS (for example https://subdomainx.domain.com) it points to /home/DOMAIN/public_html/index.php. We need that when you visit a subdomain via HTTPS, it points to the same directory as its HTTP equivalent: /home/DOMAIN/public_html/subdomainx. Our provider tells us that this is not possible, that the current behaviour is correct, and that to achieve this we need to do it with .htaccess. I've tried a few things, including this solution, which seems to be what I need: http://stackoverflow.com/questions/5365612/advice-on-configuring-htaccess-file-to-redirect-http-subdomain-to-https-equival But I can't get it to work. Any tips? Thanks. Added: the server is Apache.
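
    A minimal sketch of the rewrite the provider is hinting at, placed in /home/DOMAIN/public_html/.htaccess, assuming the SSL virtual host shares that document root and domain.com stands in for the real domain:

        RewriteEngine On
        # map https://subdomainx.domain.com/path to /subdomainx/path
        RewriteCond %{HTTPS} on
        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /%1/$1 [L]

    The REDIRECT_STATUS condition keeps the rule from re-running after its own internal redirect.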

    Read the article

  • nginx: handling 404 with error_page

    - by ytw
    Originally, I have something like this in the nginx.conf file:

        location ^~ /test_api {
            types { application/json json; }
            root /usr/local/www/data;
            rewrite "/test_api/(.*)" /api_response/test_api_$1.json break;
            error_page 404 /api_response/unknown_request.json;
        }

    When a requested resource is not found locally, unknown_request.json (the default response) is returned correctly. Then I had to change the rewrite to point to a remote server, as follows:

        rewrite "/test_api/(.*)" $scheme://www.somedomain.com/test_api_$1 break;

    It no longer returns unknown_request.json (the default response), even though the remote server returns a 404. Is there a way to keep returning unknown_request.json to the client when the remote server returns a 404, assuming the remote server can't be changed to return unknown_request.json? Thanks very much.
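
    A rewrite to an absolute URL answers the client with a redirect, so the 404 then happens directly between the client and the remote server and never passes back through Nginx. A sketch of the usual alternative -- proxying instead of redirecting -- assuming www.somedomain.com is reachable from the Nginx host:

        location ^~ /test_api {
            rewrite ^/test_api/(.*)$ /test_api_$1 break;
            proxy_pass http://www.somedomain.com;
            proxy_intercept_errors on;   # let error_page act on upstream errors
            error_page 404 /api_response/unknown_request.json;
        }

        location /api_response/ {
            root /usr/local/www/data;    # serves the local fallback JSON
        }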

    Read the article

  • Network share permission issues

    - by JL
    I have an IIS server running a site whose app pool runs as Local System; this is done because it's easier to get full permissions to certificates and other file-based resources on the local server. The problem is that when I try to write or copy a file to a network share, permissions are obviously not in place on the remote system for the IIS server's Local System account. Is it possible to grant permissions on the remote system (read/write or even full access) to the IIS server's Local System account?
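
    Yes: when a service runs as Local System, it authenticates on the network as the machine's domain computer account (DOMAIN\MACHINE$), and that account can be granted rights like any user. A sketch run on the remote file server, with placeholder share, path and machine names:

        rem share-level permission for the IIS machine's computer account
        net share Uploads=D:\Uploads /grant:"DOMAIN\WEBSRV$",FULL

        rem matching NTFS permission (modify, inherited by new files/folders)
        icacls D:\Uploads /grant "DOMAIN\WEBSRV$:(OI)(CI)M"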

    Read the article

  • intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server works for around 10 minutes out of every 30. Things I've figured might be the problem:

    - The server is under load (it is a database server), but in those spare moments when I can connect it is still under the same load, which doesn't make sense.
    - The server runs Ubuntu, and consolekit was using a lot of virtual memory. I restarted consolekit and it seems to be using a sane amount of memory now.
    - It is not hosts.allow or hosts.deny; those are set up properly.
    - It is not a firewall problem; those settings were working, and the same settings work for other similar machines.
    - It is on EC2, the Amazon cloud.
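
    One classic cause of intermittent ssh_exchange_identification failures on a loaded box is sshd's MaxStartups cap: once that many unauthenticated connections are pending, new ones are dropped before the banner is even sent. A sketch of what to check (login details are placeholders):

        # the cap on concurrent unauthenticated connections (default 10)
        grep -i maxstartups /etc/ssh/sshd_config

        # verbose client output shows exactly where the handshake dies
        ssh -vvv user@server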

    Read the article

  • subversion issue on mac os x

    - by user32942
    This exists in my httpd.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /Users/iirp/Sites/svn
            Allow from all
            #AuthType Basic
            #AuthName "Subversion repository"
            #AuthUserFile /Users/iirp/Sites/svn-auth-file
            #Require valid-user
        </Location>

    This works fine. When I change it to:

        <Location /svn>
            DAV svn
            SVNParentPath /Users/iirp/Sites/svn
            #Allow from all
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /Users/iirp/Sites/svn-auth-file
            Require valid-user
        </Location>

    and access my repository through the URL, I get the authentication screen, but after that my svn repository is not shown correctly. The message it gives me is:

        Internal Server Error

        The server encountered an internal error or misconfiguration and was unable
        to complete your request. Please contact the server administrator,
        [email protected] and inform them of the time the error occurred, and anything
        you might have done that may have caused the error. More information about
        this error may be available in the server error log.
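
    A 500 after a successful password prompt often means Apache could read the Location config but not the password file itself; the server error log has the real cause. A sketch of both checks, reusing the paths from the question (someuser is a placeholder):

        # (re)create the password file in the expected format and location;
        # -c creates the file, -m uses MD5 hashes
        htpasswd -cm /Users/iirp/Sites/svn-auth-file someuser

        # Mac OS X's bundled Apache logs the underlying error here
        tail -20 /var/log/apache2/error_log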

    Read the article

  • SBS2008 : can't have a mailbox accessible from outside?

    - by Bertrand SCHITS
    We have an SBS 2008 server with 9 users using the embedded Exchange. The MX record points to this server, and this works fine. We want 2 remote users to also have a mailbox on this server. The consultant says we can't, because SBS doesn't allow remote users for Exchange. He may be right, but it seems very strange to me, and I can't find anything related to that. I don't want to touch this server for political reasons, so I can't test it myself. Can anyone confirm whether Exchange on SBS 2008 can or can't be reached from outside?

    Read the article

  • Help - since adding an elastic load balancer to my EC2 web application I cannot connect with the MySQL database (not in AWS)

    - by undefined
    I have a web application that uses an EC2 instance to receive uploaded images, resize them, store them on S3 and update my MySQL database with the image record. This database is hosted outside Amazon Web Services, so it obviously involves communication between the EC2 instance and the database. Images are posted to the upload server from a Flash client, which receives the IP address of the upload server when it is loaded and so sends images to 1.12.23.34/resize_script.php. This has worked great... until I started to try and include a load balancer. Since ELBs use a DNS name rather than an IP address, I am now passing this DNS name to Flash. Now when I upload images I get the following response from the server:

        Could not connect to MySQL: Lost connection to MySQL server at
        'reading initial communication packet', system error: 111

    What might be causing the lost connection to the MySQL server? Are there any additional steps I need to take to allow my upload servers to be load balanced? I have set the host property of my MySQL privileges for this user to %. Any pointers greatly appreciated, thanks.
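
    System error 111 is "connection refused" at the TCP level. One hedged observation: an ELB only changes how requests reach the instances; connections to the database still originate from each instance's own address, so a database-side firewall that whitelisted only the original instance's IP will refuse any newly balanced instance. A quick test from each upload instance, with placeholder hostname and user:

        # does the database host accept a TCP connection and a login?
        telnet db.example.com 3306
        mysql -h db.example.com -u appuser -p -e 'SELECT 1'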

    Read the article

  • Want to make my pages end with .html

    - by user41997
    Here is my current .htaccess ("For security reasons, Option FollowSymlinks cannot be overridden"):

        Options +FollowSymlinks
        Options +SymLinksIfOwnerMatch
        ErrorDocument 404 /404.php
        RewriteEngine on
        rewritecond %{http_host} ^jugep.com [nc]
        rewriterule ^(.*)$ http://www.jugep.com/$1 [r=301,nc]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^peliculas/([^/]+)$ pelicula.php?pelicula=$1 [L]
        RewriteRule ^descargar/([^/]+)$ descargar.php?descargar=$1 [L]
        RewriteRule ^peliculas$ peliculas.php [L]
        RewriteRule ^peliculas/$ peliculas.php [L]
        RewriteRule ^buscar$ buscar.php [L]
        RewriteRule ^buscar/$ buscars.php [L]
        RewriteRule ^contactar$ contactar.php [L]
        RewriteRule ^contactar/$ contactars.php [L]

    Can someone help me out here? I would like my links to end with .html. Currently a link on my site looks like this: http://www.jugep.com/peliculas/Casino_Royale. I would like it to look like this: http://www.jugep.com/peliculas/Casino_Royale.html. Any help is greatly appreciated.
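
    A sketch of the two rules that would map the .html form onto the existing scripts; they should sit above the current extension-less rules so that ([^/]+) doesn't swallow the suffix first:

        RewriteRule ^peliculas/([^/]+)\.html$ pelicula.php?pelicula=$1 [L]
        RewriteRule ^descargar/([^/]+)\.html$ descargar.php?descargar=$1 [L]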

    Read the article

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I can't find out more, so I need the following: What are the ways to build one? (I know only virtualization.) Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks!

    EDIT #1: Failover of servers with a business application on them.

    EDIT #2: It would be great to hear a summary of solutions for the SLES servers.

    EDIT #3: So if I understand correctly, in my cases the main ways are to use built-in solutions or virtualization. Now I have additional questions: Does the manufacturer of the blades (for example HP or IBM) provide some solution, without virtualization? Do I need an additional server to monitor the "heartbeat" between the main and redundant servers? And for virtualization: if I have several physical servers with VMs, do I need an additional server to monitor the availability of the VMs and move them to another physical server when their host fails? Sorry for my poor English.

    EDIT #4: Failover of a VM or OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a file system image on it. I've started to think my question is badly posed and I need to rework it.

    Read the article

  • Which IMAP servers support both messages and subfolders in a folder?

    - by user43516
    I currently have a POP server for email, which is delivered to a Thunderbird client. This Thunderbird client stores email locally in a hierarchy of subfolders, each containing both emails and subfolders. I now want to access this locally stored email from other clients. My first solution was to create an IMAP mailbox and move all the messages to the IMAP server. I wasn't able to, because my IMAP server refused to create subfolders inside folders already containing email. As the original hierarchy is quite complex, I can't modify it to get along with this limitation. Are there any Linux IMAP servers that would accept having both messages and subfolders in a folder?
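
    For what it's worth, this limitation usually comes from the mailbox format rather than the server: classic mbox storage cannot mix messages and subfolders, while Maildir++ can. A sketch of the relevant Dovecot setting, assuming Dovecot is an option:

        # dovecot.conf -- Maildir++ lets a folder hold both messages and
        # child folders (a "dual-use" mailbox)
        mail_location = maildir:~/Maildir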

    Read the article

  • Open source app to manage and run commands on cloud servers? [closed]

    - by Mark Theunissen
    I'm creating a SaaS platform, and I need a component / library that can create, delete and store the connection details for cloud servers. It also needs to support executing shell commands on these servers and returning the response to the caller. I want a central database of servers and their configuration, plus the ability to reach out and manage the servers via SSH execution of bash scripts. I don't want something that needs agents on every server, like Chef. For example, this command is received by the hypothetical application:

        CREATE USER server = server12345 name = myuser

    It's translated into the following set of actions and executed by the app, which knows how to connect to server12345 and how to create a user on that server:

        $ ssh root@server12345
        $ adduser myuser

    And returns the output from the shell:

        Added user myuser.

    I've done research on Google and can't quite find something that does this already. I've found:

    fabric -- Handles the execution of shell commands very elegantly and can take multiple server definitions, but it's meant as a deployment tool, so it doesn't do everything required above. For example, it doesn't have a daemon mode where it listens for commands; it expects to be executed from the shell. It also can't provide the central database functionality.

    libcloud -- This library can handle the server admin (CRUD) part, but doesn't have a command interface daemon either, and doesn't let you execute commands on the servers. I guess I need something that is a combination of libcloud, fabric and django for an API. Or something else that does the same thing regardless of language.

    Overmind -- A GUI and wrapper around libcloud, but doesn't support the command execution part.

    What am I missing here?
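
    Absent an off-the-shelf match, the execution layer itself is small. A minimal shell sketch, assuming a SQLite file servers.db with a servers(name, ip) table (both hypothetical, and with no input escaping -- a sketch only):

        #!/bin/sh
        # run_on.sh <server-name> <command...>: look up a server's address
        # in the central database and run the command over SSH
        name="$1"; shift
        ip=$(sqlite3 servers.db "SELECT ip FROM servers WHERE name = '$name'")
        [ -n "$ip" ] || { echo "unknown server: $name" >&2; exit 1; }
        ssh "root@$ip" "$@"

    Usage matching the example above: ./run_on.sh server12345 adduser myuser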

    Read the article

  • NFS default to 777

    - by ipengineer
    I have an NFS share that is shared between several different applications. Our web server runs PHP, and when PHP creates directories it does not set the permissions correctly, so it cannot write to a directory once created. How can I mount this NFS share so that PHP has full read/write access? Below are a directory that was created, the media server's export options, and the mount options on the web server. Ideally I could set the permissions on /opt/mount, and whatever group/user is on that directory when I mount the share would be assumed for it.

        dr----x--t. 2 nobody nobody 4096 Jun  5  2014 user_2

    Mount output:

        media.dc1:/home/fs_share on /opt/mount type nfs (rw,vers=4,addr=10.10.20.127,clientaddr=10.10.20.42)

    Exports file from the media server:

        /home/fs_share 10.10.20.0/255.255.255.0(rw,sync,no_root_squash)
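
    One way to get that behaviour is to squash every NFS client to a fixed uid/gid on the export, so everything PHP creates or reads belongs to one known owner. A sketch of the media server's exports line, assuming the web server's PHP runs as uid/gid 48 (apache's default on Red Hat; substitute the real IDs):

        # /etc/exports on media.dc1
        /home/fs_share 10.10.20.0/255.255.255.0(rw,sync,all_squash,anonuid=48,anongid=48)

    followed by exportfs -ra on the media server to apply it.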

    Read the article

  • Apache Connection vs. Request

    - by user101570
    I apologize in advance if this is a basic question, but I am quite confused after reading the Apache documentation and other tutorials. Does a single Apache prefork process serve all HTTP requests for a given client? That's what I thought, but when I reduce MaxClients down to a low number, my page load times slow to a crawl, despite the fact that I'm the only client on the server in question. This suggests each process serves a single HTTP request at a time, rather than serving all requests within the TimeOut window. So if a single webpage requires 15 HTTP requests to load fully, do I need 15 prefork Apache processes to serve it optimally?
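
    Roughly: each prefork child handles exactly one TCP connection at a time, and with KeepAlive on, a browser's idle connection pins a child for up to KeepAliveTimeout seconds, which is why a very low MaxClients crawls even with a single visitor. It does not take 15 children for 15 requests, though; a browser opens a handful of parallel connections and reuses each one for several requests. A sketch of the interacting directives (Apache 2.2-era names, illustrative values):

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150   # max simultaneous connections, one per child
        </IfModule>

        KeepAlive On                  # let one connection carry several requests
        KeepAliveTimeout 5            # ...but only pin a child this long when idle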

    Read the article
