Search Results

Search found 19615 results on 785 pages for 'apache config'.


  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago, I updated Nginx to the latest version (the admin told me it was safe to do). But since then, my site serves only HTML content: no CSS, images, or JS. If I try to open some image I get the message "Welcome to nginx!" (same thing if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory they were in before the static subdomain was set up. I have been googling for solutions and tried changing things in /etc/nginx/, but no luck. I feel that this is some minor configuration problem. Any ideas?

    EDIT: Here is the /etc/nginx/nginx.conf file content:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Here is the /etc/nginx/sites-enabled/default file content:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            # Only for nginx-naxsi : process denied requests
            #location /RequestDenied {
            #    # For example, return an error code
            #    return 418;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            #
            #    # With php5-cgi alone:
            #    fastcgi_pass 127.0.0.1:9000;
            #    # With php5-fpm:
            #    fastcgi_pass unix:/var/run/php5-fpm.sock;
            #    fastcgi_index index.php;
            #    include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    root html;
        #    index index.html index.htm;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

        # HTTPS server
        #
        #server {
        #    listen 443;
        #    server_name localhost;
        #
        #    root html;
        #    index index.html index.htm;
        #
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #
        #    ssl_session_timeout 5m;
        #
        #    ssl_protocols SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        #    ssl_prefer_server_ciphers on;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}
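
    A hedged guess rather than a confirmed diagnosis: nothing in sites-enabled matches static.mysitedomain.com, so those requests fall through to the default server block (which is exactly when nginx shows the "Welcome to nginx!" page). A minimal sketch of a server block for the static subdomain; the root path is an assumption and should be replaced with the directory that actually holds the files:

        server {
            listen 80;
            server_name static.mysitedomain.com;
            # assumption: the static files live in the main site's docroot
            root /var/www/mysitedomain;
        }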

    Read the article

  • Connect to development server via switch?

    - by letseatfood
    I apologize upfront if this isn't appropriate for Super User. I asked it on Server Fault and was told it was too general for that site. I have a laptop with Vista that receives its internet connection wirelessly. I have an old desktop with Ubuntu 10.04 Server Edition installed, set up as a development server with Apache, MySQL, and PHP. This works well, with the server having a static IP address. The server is connected to the router via an Ethernet cable. Is it possible to disconnect the cable that currently connects the server and the router, then connect the server and workstation to a switch, so that I can still reach the server via web browser from the workstation? I am trying to eliminate the long Ethernet cable connecting the server to the router in the other room. I already have a switch; that is why I asked specifically about it. Thanks, and I will be more than happy to clarify!
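
    For reference, a sketch of what an isolated switch segment would need: with no router (and therefore no DHCP) on the switch, both machines need static addresses in the same subnet. Addresses here are illustrative assumptions. On the Ubuntu 10.04 server, in /etc/network/interfaces:

        auto eth0
        iface eth0 inet static
            address 192.168.10.2
            netmask 255.255.255.0

    The Vista laptop's wired adapter would get a matching static address (e.g. 192.168.10.3), after which http://192.168.10.2/ reaches Apache; the laptop can keep using wireless for internet at the same time.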

    Read the article

  • Mod_jk Tomcat VirtualHost

    - by user37143
    Hi, I have two applications in Tomcat, app1 and app2. I have mod_jk configured for the Apache front end, and I am able to get Tomcat's index.jsp. Then I created two virtual hosts for app1 and app2, so that app1.domain.com would point to app1 in Tomcat and app2.domain.com would point to app2 in Tomcat, but it's not working. I have the vhost as:

        <VirtualHost *:80>
            ServerName www.app1.domain.com
            ServerAlias app1.domain.com
            DocumentRoot "/opt/tomcat/webapps/app1"
            DirectoryIndex index.jsp
            <Directory "/opt/tomcat/webapps/app1">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            # The following section added for Jk
            JkMount /*.do ajp13
            JkMount /*.jsp ajp13
            JkMount / ajp13
            JkMount /* ajp13
            JkUnMount /*.php ajp13
            JkUnMount /*.gif ajp13
            JkUnMount /*.html ajp13
            JkUnMount /*.css ajp13
            JkUnMount /*.png ajp13
            JkUnMount /*.jpg ajp13
        </VirtualHost>

    But this did not work; both subdomains load Tomcat's index.jsp. Can someone help me? Thanks
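
    One thing worth checking, offered as a sketch rather than a confirmed fix: JkMount only chooses a worker, and both vhosts forward to the same ajp13 worker, so Tomcat itself must also distinguish the hostnames. That means matching Host entries in Tomcat's server.xml, roughly like:

        <Host name="app1.domain.com" appBase="webapps">
            <Alias>www.app1.domain.com</Alias>
            <Context path="" docBase="app1"/>
        </Host>
        <Host name="app2.domain.com" appBase="webapps">
            <Alias>www.app2.domain.com</Alias>
            <Context path="" docBase="app2"/>
        </Host>

    Without these, Tomcat routes every request to its defaultHost, which would explain both subdomains showing the same index.jsp.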

    Read the article

  • hudson/jenkins: help needed to get started with customization work

    - by user64204
    I would like to customize Jenkins by adding links to the left-hand side panel and using the pages associated with those links to serve some custom content in place of the jobs/views table displayed by default. I managed to add links to the sidebar using the sidebar-links plugin. Now I'm trying to see how to replace the content of the <td id="main-panel"> element with some custom content. The custom content is generated by some PHP scripts, which ideally should be called by Jenkins every time the custom pages are requested; if that is too complicated, I can either create static content to be served by Jenkins (calling my PHP scripts from a crontab) or see if the calls to the PHP scripts can be done by Apache itself before the page requests are sent to Jenkins. I'm not sure writing a plugin is the best way to proceed, and I would like your thoughts on how to implement this.
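
    If the Apache-fronted route is acceptable, here is a minimal sketch (hostnames, ports, and paths are assumptions) that serves the custom pages from PHP and passes everything else through to Jenkins:

        # assumes mod_proxy, mod_proxy_http and mod_php are enabled
        <VirtualHost *:80>
            ServerName jenkins.example.com

            # custom pages handled by Apache/PHP, never reaching Jenkins
            Alias /custom /var/www/jenkins-custom
            <Directory /var/www/jenkins-custom>
                Order allow,deny
                Allow from all
            </Directory>

            # everything else goes to Jenkins
            ProxyPass /custom !
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    The links added with the sidebar-links plugin would then point at /custom/... URLs, avoiding plugin development entirely, at the cost of the custom pages not sharing Jenkins's page chrome.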

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now, the filesystem can not be mounted:

          pool: tank
         state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

            NAME        STATE     READ WRITE CKSUM
            tank        UNAVAIL      0     0     0  insufficient replicas
              c10t0d0   ONLINE       0     0     0
              c8t0d0    UNAVAIL      0     0     0  cannot open
              c8t1d0    ONLINE       0     0     0
              c10t1d0   ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?
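
    Speculative, but two avenues exist for this situation: if the device merely changed paths or had a transient failure, re-attaching it may bring the pool back; and zdb can read labels and dataset metadata in some broken states. A sketch, with device names taken from the status output above:

        # try to bring the missing member back online
        zpool online tank c8t0d0

        # inspect the ZFS labels on the device to confirm it is the pool member
        zdb -l /dev/rdsk/c8t0d0s0

        # if the pool comes back even degraded, list its datasets
        zdb -d tank

    With a striped (non-redundant) pool, though, there is no full file listing without the missing device, since filesystem metadata is striped across all members.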

    Read the article

  • Installed Windows 8 Upgrade AFTER Formatting HD - any way to activate?

    - by Brandon Vogel
    I had an XP system, formatted the HD, then ran the Windows 8 Upgrade install. The install itself was fine, but it cannot activate and gives me the 'this is an upgrade' error. Is there ANY way to fix this (an MS support call or something?) before I scrap the entire Windows 8 install, reinstall XP, then upgrade to Windows 8 the proper way? I hate to waste the day's worth of config-the-new-OS time if not absolutely necessary. The error from Windows 8 was 0xC004F061.

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run Ubuntu and also run an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration.

    On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;

            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;

        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file -- it's been done quickly...):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double-checked my settings, and all seems fine. So I'm wondering whether I am expecting nginx's cache to do something it's not meant to do?

    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requested it... but I now suspect I have misunderstood many things. Above all, I now wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive proxy_header_pass (or something similar); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
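
    For what it's worth, a sketch of how this usually resolves: nginx's proxy cache is purely time-based. With proxy_cache_valid 12h, a cached entry is simply served for 12 hours and the backend is never consulted, so the proxy cannot "see" file changes in the container. Two stock nginx directives (no third-party purge module needed) can soften this; the header name below is an invented convention:

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            # revalidate more often for a static site that changes in place
            proxy_cache_valid 200 5m;
            # let deploy scripts force a refresh with a custom header
            proxy_cache_bypass $http_x_cache_refresh;
        }

    proxy_cache_bypass makes nginx skip the cached copy (and store the fresh response) whenever the named variable is non-empty, so a post-deploy curl -H "X-Cache-Refresh: 1" against each updated URL refreshes the cache. Note that expires 30d also makes browsers cache aggressively, independent of the proxy.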

    Read the article

  • Restore access to Cisco Connect after changing router settings

    - by StasM
    I have recently bought a Cisco Valet Plus (M20) wireless router (which I now recognize was a mistake, but never mind). It has two setup options: the Cisco Connect software and web-based setup. The Cisco Connect software allows changing only a very small set of settings; the web-based setup allows access to almost all settings, except the settings for the guest network. The problem is that after I use the web-based setup, Cisco Connect refuses to talk to the router, so I can't change the guest settings anymore (since the web interface doesn't allow changing them). It must be some config parameter not matching or some password set wrong, but I don't know where Cisco Connect stores them. So, does anybody have any idea how to make Cisco Connect talk to the router again once I have changed settings through the web interface?

    Read the article

  • Can't change permission/ownership/group of external hard drive on Ubuntu.

    - by MikeN
    I have an external hard drive connected to my Linux box. I wanted to set up a web server to access files on it, but the permissions on all files and directories on the drive are "rwx" for the owner only, the owner being my local login, and the group being the "root" group. I need the files readable by the apache user. I tried to set all files with "chmod a+rwx -R *", but this does nothing (it gives no errors; it just has no effect). I tried changing the group using "chgrp" to my user group, but that won't work either: it gives me errors that I lack permission, even when I run all those commands with sudo! What's up with this hard drive??? "sudo chmod a+rwx *" should work on anything, right?
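
    A guess at the cause, with a sketch: chmod/chgrp silently having no effect, even under sudo, is the classic symptom of a FAT32 or NTFS filesystem, which has no Unix permission bits; ownership and mode are fixed at mount time. The device name and mount point below are assumptions:

        # see what filesystem the drive actually uses
        mount | grep /media

        # remount with modes Apache can read (NTFS example; uid 1000 is
        # assumed to be the local login)
        sudo umount /media/external
        sudo mount -t ntfs-3g -o uid=1000,gid=www-data,umask=022 /dev/sdb1 /media/external

    With umask=022, everything appears world-readable (755-style), which is enough for Apache.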

    Read the article

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand up against attacks better? Are there plugins? I'm looking for a way to RATE LIMIT and remain up without slowing down. My setup:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        worker_priority -2;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 40480;

        events {
            worker_connections 5120; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;

            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 9;
            gzip_buffers 16 8k;
            # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;

            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    vhost file:

        server {
            error_log /var/log/nginx/vhost-error_log warn;
            listen 194.145.208.19:80;
            server_name ipxnow.in www.ipxnow.in;
            access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log;
            access_log /usr/local/apache/domlogs/ipxnow.in combined;
            root /home/ipxnowin/public_html;

            location / {
                location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ {
                    expires 7d;
                    try_files $uri @backend;
                }
                error_page 405 = @backend;
                add_header X-Cache "HIT from Backend";
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }

            location @backend {
                internal;
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }

            location ~ .*\.(php|jsp|cgi|pl|py)?$ {
                proxy_pass http://194.145.208.19:8081;
                include proxy.inc;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    and proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
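
    A sketch of stock-nginx rate limiting (no plugin required); the zone names and rates below are illustrative, not recommendations:

        # in the http block: track request rate per client IP
        limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
        # and a per-IP connection cap to pair with it
        limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

        # in the vhost's location / block:
        limit_req zone=req_per_ip burst=20 nodelay;
        limit_conn conn_per_ip 20;

    limit_req rejects (503 by default) requests above 10/second per IP, with burst allowing short spikes; it is nginx's built-in answer to "rate limit and stay up". Anything beyond that (SYN floods, distributed attacks) is better handled at the kernel/iptables level than inside nginx.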

    Read the article

  • Create a folder shortcut/redirection in Vista

    - by Ellipsis
    Given a plain old directory of files in Windows Vista, say "C:\OldFolder\", is it possible to move the contents of that directory to a new location, perhaps "F:\NewFolder\", and keep a shortcut-like 'virtual' folder at C:\OldFolder that will always redirect access to the updated location? Shortcuts work to some extent for users going through the GUI, but all application links to the old location will no longer work, even with a shortcut. For example, if MS Word tried to access C:\OldFolder\document.doc, I would want Windows to simply rewrite its request to F:\NewFolder\document.doc... I guess I'm basically looking for Apache's mod_rewrite for Windows Vista... any suggestions?
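
    One built-in mechanism worth sketching: NTFS directory junctions, which Vista can create with mklink and which redirect transparently for both the GUI and applications (so the MS Word example works). This assumes both volumes are local NTFS drives:

        rem run from an elevated command prompt, after moving the contents
        rem and deleting the now-empty C:\OldFolder
        mklink /J C:\OldFolder F:\NewFolder

    Every access to C:\OldFolder\document.doc is then silently resolved to F:\NewFolder\document.doc by the filesystem, essentially mod_rewrite at the NTFS level.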

    Read the article

  • Dynamically loading chef recipes from URLs

    - by andy
    I'm deploying a web app on AWS. I intend to use Chef to build AMIs, which I'll then put into production. I want Chef to monitor a URL stored in SimpleDB; the URL would point to a tarball in S3. There would be different URLs: one for a config tarball, one for a code tarball. When I update a URL in SimpleDB, I want Chef to spot this, then pull in the tarball and apply the configs / deploy the code. Is this possible? Has anything like this been done before, or would I need to roll my own code? I think Chef can monitor URLs, but what would be the best way of getting it to load that URL from SimpleDB?
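
    A sketch of the Chef side, assuming the SimpleDB lookup has already happened (e.g. via the aws-sdk gem in a library file) and left the URL in a node attribute; the attribute names and paths are invented for illustration:

        # recipe: re-fetch the tarball whenever the URL (or its content) changes
        remote_file '/var/cache/app-code.tar.gz' do
          source node['deploy']['code_tarball_url']
          action :create
          notifies :run, 'execute[unpack app code]', :immediately
        end

        execute 'unpack app code' do
          command 'tar -xzf /var/cache/app-code.tar.gz -C /srv/app'
          action :nothing
        end

    Running chef-client periodically (or triggering it with knife ssh) then gives the "monitoring" behaviour: each run re-reads SimpleDB, and remote_file only re-downloads when the source content differs from the cached copy.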

    Read the article

  • nginx not acknowledging passenger_root option

    - by lowgain
    I'm running a Sinatra app on Passenger and trying to hook it to nginx. The relevant part of my config looks like:

        http {
            passenger_root /path/to/gem;   # this is a valid path
            passenger_ruby /path/to/ruby;  # ditto
            # etc...

            server {
                listen 80;
                server_name hello.org;
                root /path/to/stuff/public;
                passenger_enabled on;
            }
        }

    Whenever I start nginx, however, I get:

        Starting nginx: [alert]: Phusion Passenger is disabled because the 'passenger_root'
        option is not set. Please set this option if you want to enable Phusion Passenger.

    What am I missing here? Thanks!
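
    A hedged observation: that alert is printed by the Passenger module itself, so the module is compiled in, but the config file nginx actually reads at startup lacks passenger_root. That usually means the init script is launching a different nginx (Passenger's installer defaults to /opt/nginx) than the one whose config was edited. A quick way to check which config the running binary uses, and to get the correct root value:

        # shows the compiled-in prefix and default conf path
        nginx -V 2>&1 | tr ' ' '\n' | grep -E 'prefix|conf-path'

        # prints the exact value passenger_root should have
        passenger-config --root

    Editing the conf file at the path reported there, with the passenger-config output pasted in as passenger_root, should clear the alert.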

    Read the article

  • VSFTPD uploaded file permissions

    - by Katafalkas
    Let me first say that there are loads of topics regarding this, and I am sure I have seen them all by now. Still, none of the solutions seem to help. I installed vsftpd and created a user ftp-data. Now I need the files uploaded by user ftp-data to have 755 permissions. Solving this should be as easy as adding:

        local_umask=002
        file_open_mode=0755

    but that did not help. Then I tried a number of variations of this, which still did not help. Then I added:

        chmod_enable=YES

    Still no luck. At the moment I think I am missing something very simple and obvious, I just can't find it. Maybe someone could help me find what I am missing. This is my config file:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_upload_enable=NO
        anon_mkdir_write_enable=NO
        dirmessage_enable=NO
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/xferlog
        listen=YES
        local_root=/var/www/ftp-gallery
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
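
    A sketch of the arithmetic that may be the missing piece: vsftpd strips the umask bits from file_open_mode when creating uploads, so 0755 with a umask of 002 yields 0754, and the final config shown above has local_umask=002 but no file_open_mode line at all, leaving the 0666 default (which gives 664). For 755 the pair would be:

        file_open_mode=0777
        local_umask=022

    since 0777 with the 022 bits removed is 0755. Also note that the file_open_mode line from the first attempt is absent from the final config file, which alone would explain the behaviour.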

    Read the article

  • Trouble accessing network drives in Windows 7

    - by Warlax
    Hi. I recently purchased a LaCie network drive. I connected it to my router, configured it, and added a user for myself. Installed the "Network Assistant" that came with it - no problem. Went to My Computer > Network, found the network drive, was prompted for a login/password to see the contents, typed \user and the password (leading slash to get rid of DOMAIN), and accessed the contents. There is one open share that I can access/mount without problems, and there's another share with my user name, as set up via the network drive's web-based config. Double-clicked on that, prompted again - but here the user/password doesn't work! It says I have no permissions, but the user/pass combination is PERFECT. Some Windows Vista posts talk about a Local Security Policy menu, but that's nowhere to be found. Any ideas? (Windows 7 Professional, BTW).
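
    A guess worth trying, sketched here because it is the classic Windows 7 + consumer NAS failure: many of these drives only speak NTLMv1, while Windows 7 defaults to sending NTLMv2 only, so a correct password is still rejected. On Professional the setting lives in secpol.msc under Local Policies > Security Options > "Network security: LAN Manager authentication level" (set it to "Send LM & NTLM - use NTLMv2 session security if negotiated"); the equivalent registry change is:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    A reboot (or at least logging off) is needed before retrying the share.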

    Read the article

  • htaccess IP blocking with custom 403 Error not working

    - by mrc0der
    I'm trying to block everyone but 1 IP address from my site, on a server running Apache and CentOS. My setup follows the example below.

    My server: http://www.myserver.com/

    My .htaccess file:

        <Limit GET>
            order deny,allow
            deny from all
            allow from 176.219.192.141
        </Limit>
        ErrorDocument 403 http://www.google.com
        ErrorDocument 404 http://www.google.com

    When I visit http://www.myserver.com/ from a non-allowed IP, it gives me a generic 403 error. When I visit http://www.myserver.com/page-does-not-exist/ it redirects me correctly to http://www.google.com, but I can't figure out why the 403 error doesn't redirect me too. Does anyone have any ideas?
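
    A hedged sketch of one thing to try: the <Limit GET> wrapper restricts only GET (and HEAD), which is rarely what's wanted anyway since other methods stay open, and removing it is a cheap test of whether the wrapper is interfering with the error handling:

        order deny,allow
        deny from all
        allow from 176.219.192.141
        ErrorDocument 403 http://www.google.com
        ErrorDocument 404 http://www.google.com

    Worth knowing regardless: pointing ErrorDocument at a full URL makes Apache answer with a 302 redirect rather than the real error status, so if the redirect still doesn't fire for 403, the next place to look is the main server config for an overriding ErrorDocument or an AllowOverride setting that ignores the .htaccess.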

    Read the article

  • Adding CNAME entry to nginx for cdn rewrite

    - by Ayaz Malik
    I am using Apache + nginx (nginx serving the static content) and just bought a CDN. I have added a CNAME entry for my CDN URL, so cdn.example.com points at the original CDN URL, xxx.netdna-cdn.com/. But, probably because of my nginx vhost file, when I visit cdn.example.com it serves the site from the first server entry in my vhost file. I have multiple sites on this server. I have added the CNAME from the cPanel DNS editor as well. No luck, so I think I need to add something in the vhost.conf.
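
    That behaviour matches how nginx picks a server: when no server_name matches the Host header, the first server block (or the one marked default) wins. So if requests carrying Host: cdn.example.com are reaching this box (for instance the CDN's origin pulls), a dedicated vhost entry is needed; the root path below is a placeholder for wherever the static assets really live:

        server {
            listen 80;
            server_name cdn.example.com;
            root /home/user/public_html/static;
            expires 30d;   # long cache headers, since the CDN fronts this
        }

    With this in place, requests for cdn.example.com get the intended files instead of the first site's pages.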

    Read the article

  • IIS Redirect a sub directory to an external URL

    - by Will Hancock
    Hi, forgive my ignorance, for I am a humble client-side developer... I have a webapp made up of static HTML and JS. I want to call an external service via AJAX, but this causes issues with CORS, i.e. the cross-domain policy in the browser. So I need to make the request to MY server, http://dev.webapp.com/service, and have the server redirect /service to http://externaldata.com/service and return the result. The Mac boys have achieved this in Apache with a proxy pass:

        ProxyPass /service http://externaldata.com/service

    Can anybody help with how to do this in IIS? I have found articles about ARR and reverse proxies - terms that are alien and seem too complicated. As a humble webdev, can I do this using the IIS GUI?
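
    ARR really is the IIS analogue of ProxyPass, and once it and the URL Rewrite module are installed (plus "Enable proxy" ticked in ARR's server-level settings), the GUI's "Add Rule(s) > Reverse Proxy" wizard generates a web.config much like this sketch:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="proxy-service" stopProcessing="true">
                  <match url="^service/?(.*)" />
                  <action type="Rewrite" url="http://externaldata.com/service/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    The browser then only ever talks to dev.webapp.com, so the cross-domain restriction never triggers.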

    Read the article

  • Getting Pango-WARNING: Invalid UTF-8 string passed to pango_layout_set_text()

    - by geerlingguy
    About three days ago, I noticed the exim mail queue started filling up on one of my servers. Upon inspecting some of the emails using # exim -Mvb $ID, I noticed they were being sent to some system email address (which is not a real address), and the body of the messages was as follows:

        (process:8259): Pango-WARNING **: Invalid UTF-8 string passed to pango_layout_set_text()

    I'm wondering what could be causing this strange issue, as I've never heard of 'pango' at all... I've never seen that function used in my lifetime! The process ID (PID) seems to belong to an apache process, though the PIDs are always gone by the time I use # ps -aux to look them up.

    Edit: Whoops! Forgot to include the subject - it looks like it's actually munin-cron that's raising the issue:

        Subject: Cron /usr/bin/munin-cron --force-root

    Read the article

  • Why do I have to manually 'Restart Management Network' on vSphere 5 host after reboot to get networking available?

    - by growse
    I've got a couple of vSphere 5.0 hosts in a small lab environment here, and I've noticed a strange behaviour: when one of the hosts gets rebooted, it is unreachable over the network until I log into the ESXi console, press F2 to customize, and select Restart Management Network. Once this is done, the networking works perfectly, as expected. Each host has two NICs which are trunked together using EtherChannel to a Cisco 3750. The link is also a .1q VLAN trunk; the management network is configured on VLAN 121, with the VM traffic on VLAN 118. Why would the host be completely dead to the world until I physically kick it?

    Edit - sample switch config for the trunk:

        interface Port-channel2
         description Blade 1 EtherChannel Trunk
         switchport trunk encapsulation dot1q
         switchport mode trunk
        end
        !
        !
        interface GigabitEthernet4/0/1
         description Bladecenter1 CPM 1A
         switchport trunk encapsulation dot1q
         switchport mode trunk
         speed 1000
         duplex full
         channel-group 2 mode on
        end

    Vswitch teaming settings and management port group settings: (screenshots in the original post)
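
    One known requirement that fits these symptoms, offered as a sketch: a static EtherChannel ("channel-group ... mode on") only works with ESXi when the vSwitch load-balancing policy is "Route based on IP hash", on both the vSwitch and the management port group. With the default "originating virtual port ID" policy, the switch can hash return traffic onto the member NIC the host isn't answering on. From the ESXi shell the policy can be checked and set with:

        esxcli network vswitch standard policy failover get -v vSwitch0
        esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash

    If that's the cause, Restart Management Network "fixing" it is just the gratuitous-ARP/relearning moment, not a real fix.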

    Read the article

  • Connection string during installation

    - by anon2009
    Hi, I've been convinced to use Windows setup files (MSI) for the installation of my new Windows Forms application, after I asked a question here and got some excellent answers (thank you all): http://serverfault.com/questions/97039/net-application-deployment

    Now I have a different question: my application will need to access a SQL Server to provide users with data, which means that the connection string must be stored in the client's app.config file. How should I handle this? Should the user enter the connection string to that database during installation? How do they get the connection string - in an email from the admin? What if the admin wants to use SQL authentication and needs to put the user info in the connection string? Just so you know, the app will be sold via the internet, so I won't have any access to the admins or the network. Any suggestions? Thanks in advance.
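
    For reference, a sketch of the app.config section the installer would ultimately have to produce; the server, database, and credential style are all placeholders, and integrated vs. SQL authentication is exactly the per-customer decision being discussed:

        <configuration>
          <connectionStrings>
            <add name="MainDb"
                 connectionString="Data Source=CUSTOMER-SQL01;Initial Catalog=AppDb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>

    A common pattern is to ship with a placeholder and have the app prompt for (and test) the connection details on first run, writing them back to the config, which sidesteps collecting them at MSI time entirely.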

    Read the article

  • OSSEC is not running

    - by batman
    I have two EC2 instances. On one I have installed the OSSEC server, and on the other the OSSEC agent. Here is my server's INBOUND config (security group/firewall):

        port: 514    source: 0.0.0.0/0
        port: 1514   source: 0.0.0.0/0

    But it seems not to be working. In my agent log file I keep getting:

        2012/08/28 06:52:52 ossec-agentd: INFO: Using IPv4 for: x.x.x.x.x.x .
        2012/08/28 06:53:13 ossec-agentd(4101): WARN: Waiting for server reply (not started). Tried: 'x.x.x.x.x'.

    Edit: Running sudo netstat --inet -nlp | grep ossec gives:

        udp    0    0 0.0.0.0:1514    0.0.0.0:*    26027/ossec-remoted

    Where am I making the mistake?
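
    A hedged pointer: the netstat line shows ossec-remoted listening on 1514/udp, and EC2 security group rules are per-protocol, so a rule created for TCP 1514 does not pass UDP. Verifying the rule is explicitly UDP is the first step; with the classic EC2 command-line tools that would look roughly like (group name is a placeholder):

        ec2-authorize ossec-server-group -P udp -p 1514 -s <agent-ip>/32

    If the agent still can't connect, re-importing the agent key on both sides (via manage_agents) and restarting both daemons is the usual second step, since "Waiting for server reply" also appears on key mismatches.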

    Read the article

  • Where can I find logs for SFTP?

    - by Jake
    I'm trying to set up sftp-server, but the client is getting an error:

        Connection closed by server with exitcode 1

    /var/log/auth.log (below) doesn't help much; how can I find out what the error is? I'm running Ubuntu 10.04.1 LTS.

        sshd[27236]: Accepted password for theuser from (my ip) port 13547 ssh2
        sshd[27236]: pam_unix(sshd:session): session opened for user theuser by (uid=0)
        sshd[27300]: subsystem request for sftp
        sshd[27236]: pam_unix(sshd:session): session closed for user theuser

    Update: I've been prodding this for a while now, and I've got the sftp command on another server giving me a more useful error:

        Request for subsystem 'sftp' failed on channel 0
        Couldn't read packet: Connection reset by peer

    Everything I've found on the net suggests this is a problem with sftp-server, but when I remove the chroot from the sshd config, I can access the system. I assume this means sftp-server is accessible and set up correctly.
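
    Given that removing the chroot makes it work, a sketch of the two usual suspects: sshd refuses a ChrootDirectory unless every component of the path is owned by root and not group/world-writable, and the external sftp-server binary cannot be executed inside a bare chroot anyway. In /etc/ssh/sshd_config, the in-process server with verbose logging avoids the second problem and surfaces the first (user and path are assumptions from the log above):

        Subsystem sftp internal-sftp -l VERBOSE

        Match User theuser
            ChrootDirectory /home/theuser
            ForceCommand internal-sftp

    plus, on the filesystem:

        sudo chown root:root /home/theuser
        sudo chmod 755 /home/theuser

    With that in place, the refusal reason ("bad ownership or modes for chroot directory ...") shows up in /var/log/auth.log instead of a silent exit 1.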

    Read the article

  • Can't ping other machines at Linux VPN PPTP server's local lan from outside

    - by Marco Sanchez
    Before anything else, hello guys; this is the first time I ask for something here, so I hope someone can give me a hand. Please look at the following network diagram:

        ---------------------------------------------------------------
        VPN Server            Webserver (SuSE SLES11)
            |                     |
            +------- VPN LAN -----+
                     |
        Router with unique IP
        (with port-forwarding rules set and VPN passthrough enabled)
                     |
        PPTP connection over Internet
                     |
        Workstation (PC or laptop with Windows)
        ---------------------------------------------------------------

    So the idea is for the workstation to connect to the PPTP server and then be able to access a web application on the webserver. Right now I have the PPTP server configured and the VPN works: I can connect to the SLES11 server from the workstation with no problems, and I can ping it and everything works fine. But if I try to ping the webserver from the workstation, I can't reach it. I'm making a mistake somewhere but I don't see where. Please note that I'm not a network expert, so I'd greatly appreciate some specific guidance. Here is some info related to the IPs:

        *** SLES11 VPN server has 2 network cards:
        -- eth0 (internal network)
           IP: 192.168.210.5     MASK: 255.255.255.0
        -- eth1 (external network)
           IP: 192.168.1.105     MASK: 255.255.255.0

        *** Webserver has 1 network card:
        -- eth0 (internal network)
           IP: 192.168.210.221   MASK: 255.255.255.0

        *** Workstation -- IP info once the connection has been established,
            PPP adapter "Test VPN Connection":

           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : Test VPN Connection
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.210.110(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 189.209.208.181 (defined in the PPTP server options config script)
                                               189.209.127.244
           Primary WINS Server . . . . . . . : 192.168.210.220 (defined in the PPTP server options config script)
           NetBIOS over Tcpip. . . . . . . . : Enabled

    I also defined the following within iptables:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A INPUT -i eth0 -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -i eth0 -p gre -j ACCEPT

    If you need any piece of information from the PPTP server scripts, please let me know. The thing is that I can actually connect to the VPN server and access its services, but after that I can't reach any other computer on that LAN. Any help would be greatly appreciated, and thanks in advance.
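
    A sketch of the most common cause for exactly this symptom (VPN server reachable, rest of the LAN not): the MASQUERADE rule only takes effect if the kernel forwards packets between ppp0 and eth0 at all, and IP forwarding is off by default. Assuming a default-ACCEPT FORWARD policy otherwise:

        # enable routing between the ppp interfaces and the LAN (persistent)
        echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
        sysctl -p

        # explicit FORWARD rules, in case the chain's policy is DROP
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -j ACCEPT

    With forwarding on, the existing POSTROUTING/MASQUERADE rule makes VPN traffic appear to come from 192.168.210.5, so the webserver needs no extra route back to the 192.168.210.110 client.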

    Read the article

  • Requesting better explanation for etag/expiration of favicon.ico

    - by syn4k
    Following this article: Configuring favicon with expires header in htaccess

    Using YSlow, I keep getting:

        (no expires) http://devwww.someplace.com/favicon.ico

    YSlow also gives Grade C on "Configure entity tags (ETags)" for the same file. My relevant config (.htaccess):

        # Configure ETags
        FileETag MTime Size

        <IfModule mod_expires.c>
            # Enable Expires headers for this directory and subdirectories that don't override it
            ExpiresActive on

            # Set default expiration for all files
            ExpiresDefault "access plus 24 hours"

            # Add proper MIME type for favicon
            AddType image/x-icon .ico

            # Set specific expiration by file type
            ExpiresByType image/x-icon "access plus 1 month"
            ExpiresByType image/ico "access plus 1 month"
            ExpiresByType image/icon "access plus 1 month"
        </IfModule>

    As you can see, I am setting both ETags and expiration; however, both seem to be ignored. Yes, mod_expires is being loaded by my Apache configuration.
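
    A couple of hedged checks, since the directives themselves look fine: ExpiresByType only matches the MIME type Apache actually sends (if the server's mime.types already maps .ico to something else, the AddType here won't rebind it for that response), and the .htaccess must live in the directory tree that actually serves /favicon.ico on the devwww vhost. Verifying the live headers removes the guesswork:

        curl -sI http://devwww.someplace.com/favicon.ico | egrep -i 'content-type|expires|cache-control|etag'

    If Content-Type comes back as anything other than image/x-icon, a FilesMatch block sidesteps the type-matching question entirely:

        <FilesMatch "favicon\.ico$">
            ExpiresActive on
            ExpiresDefault "access plus 1 month"
        </FilesMatch>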

    Read the article
