Search Results

Search found 12484 results on 500 pages for 'host'.


  • Linux/hostapd: AP can ping clients and clients can reach the internet, but HTTP transfers from the AP on wlan1 stall after 5-6 packets

    - by mhambra
    Please edit the title, I can't make it sound better. -- OP. Hi all, I have a Wi-Fi USB dongle in a PC that serves as an AP for a laptop. wlan1: 192.168.2.1, netmask 255.255.255.0, routed with: route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.2.1. Pinging 192.168.2.2 (the laptop) was fine for lots of packets. Now, when I access 192.168.2.1:80/myindex.html (Apache) from the laptop, I can see my own 1 kB test page. But trying to access 192.168.2.1:80/my.jpg, I see the following: GET /my.jpg HTTP/1.1, 200 OK, <jpg header, about a kilobyte>, <TCP packet retransmission>, <TCP packet retransmission>, <end of stream>. It seems to be a hostapd problem (networking worked fine in ad-hoc mode), but it may also be a forwarding/routing problem. What should I google for? Even stranger, SSH to that host works fine.
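
    One way to check whether large frames are being dropped between the AP and the client is a do-not-fragment ping sweep plus a packet capture; a minimal diagnostic sketch (interface names and sizes are assumptions, not from the original post):

        # from the laptop (192.168.2.2): probe the path MTU towards the AP
        ping -c 3 -M do -s 1472 192.168.2.1    # 1472 + 28 bytes of headers = 1500
        ping -c 3 -M do -s 1400 192.168.2.1    # try smaller payloads if the first size fails
        # on the AP: watch what actually leaves wlan1 while the jpg is requested
        sudo tcpdump -ni wlan1 host 192.168.2.2 and port 80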

    Read the article

  • EMC VNX iSCSI setup - unsure about SP/port assignment

    - by pauska
    We have a new VNX5300 waiting to get configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1 Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most out of the performance until we jump over to 10 Gig iSCSI. From what I can read in the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them. Also, I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from the others on different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports? It also gives the following example: Does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go over one switch (if both A1 and B1 are passive)? Edit: Another brain puzzle I don't get: each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1 Gbit and serverB has 100 Mbit, then the resulting bandwidth between them is 100 Mbit. How can this result in some kind of oversubscription? Edit4: Wait, what. Active and passive ports? The VNX runs in an ALUA configuration with asymmetric active/active... there shouldn't be any passive ports, only preferred ones.

    Read the article

  • After a few days of running fine, nginx starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
          least_conn;
          server 127.0.0.1:3009;
          server 127.0.0.1:3010;
          server 127.0.0.1:3011;
        }
        server {
          listen 80; # default_server;
          server_name xyz.com *.xyz.com;
          client_max_body_size 5M;
          access_log /home/ubuntu/www/xyz/current/log/access.log;
          root /home/ubuntu/www/xyz/current/public/;
          index index.html;
          location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_read_timeout 150;
            if (!-f $request_filename) {
              proxy_pass http://domain1;
              break;
            }
          }
        }
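
    A 499 means the client gave up waiting and a 502 means nginx could not get a valid answer from the upstream, so a first step is checking whether the three thin instances are still accepting connections when the errors appear. A small diagnostic sketch (ports taken from the config above, log path assumed):

        # are the thin workers still listening on 3009-3011?
        sudo netstat -tlnp | grep -E ':30(09|10|11)'
        # correlate the 499/502 entries with upstream errors
        sudo tail -n 50 /var/log/nginx/error.log
        # look for thin workers that are swapping or leaking memory over time
        free -m && ps aux | grep [t]hin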

    Read the article

  • Set up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's call it) a low-cost Ra*san which would host the images for our social site (many millions). We have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. For HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think I'll go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?
    - I'm not sure between RAID 6 and RAID 10 (more IOPS vs. SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an expander backplane?
    If anyone has built something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming to do is to access different projects by typing in 192.168.1.10/project, and then view the project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and then accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas? EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following information:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying: "Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist", when it clearly does exist. I've disabled SELinux and I can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server and the problem persists. What should I change in order to resolve this?
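
    Since no DNS name is involved, one option is to give each project its own port instead of a name-based vhost; a minimal sketch of that variant (paths are assumptions, Apache 2.2 syntax as shipped with CentOS 6):

        Listen 8081
        <VirtualHost *:8081>
            DocumentRoot /var/www/html/project1
            <Directory /var/www/html/project1>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The project would then be reached at http://192.168.1.10:8081/, which keeps CakePHP's .htaccess rewrites rooted at / rather than /project. Separately, the warning above points at /www/html/project1 while the files may actually live under /var/www/html/project1, which is worth double-checking.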

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:
    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester," or make it unlisted.
    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a Domain App, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
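
    For option (1), the "Specify a Custom App" entry generally takes the extension ID plus the URL of an update manifest rather than a direct link to the .crx, so it may be worth serving a manifest next to the packaged file. A sketch of such a manifest (the ID, URL and version are placeholders):

        <?xml version='1.0' encoding='UTF-8'?>
        <gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
          <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>
            <updatecheck codebase='https://example.com/extensions/school-a.crx' version='1.0.0' />
          </app>
        </gupdate>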

    Read the article

  • ESXi 5 Guests will not boot

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server. Note: Investigation eventually determined that the problem was with the VMware client on a Lenovo Edge laptop, the only Windows box available in a Linux IT shop. vSphere Client v4 and v5 duplicated the behavior on the Lenovo Edge. As indicated in the comment to the accepted answer, replacing the workstation with one using different video was the "fix" for this particular issue. The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560 MB and drops down to 40 MB after a few seconds. Initial CPU usage is 1 full CPU (3000 GHz per the chart) and immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. No errors in the Events tab. Switching a guest from BIOS to EFI makes no difference. VMs are listed as Version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious. Edit: No firewalls, routers, or VLANs in between the client and server. Edit 2: We have tried the 'Boot guest into BIOS screen at next boot' checkbox in the guest settings. It was not successful. Edit 3: 500 GB datastore with one 40 GB VM on it. Plenty of space. Edit 4: Guests copied from my old ESXi 4 server DO NOT boot on the ESXi 5 system. Initially it complains about too little video RAM being configured for the default 2500x1600, but it still doesn't work properly even after I bump the video RAM settings or switch them to Auto-Detect.

    Read the article

  • NTOP gives warnings on startup

    - by FR6
    I just installed ntop 1.4.4 and when I start it, it gives me endless "packet truncated" warnings:

        ...
        RRD_DEBUG: umask 0066
        RRD_DEBUG: DirPerms 0700
        THREADMGMT: RRD: Started thread (t2992630672) for data collection
        THREADMGMT[t2992630672]: RRD: Data collection thread starting [p30923]
        INIT: Created pid file (/var/run/ntop.pid)
        THREADMGMT[t3086329552]: ntop RUNSTATE: INITNONROOT(3)
        Now running as requested user 'nobody' (99:99)
        Note: Reporting device initally set to 0 [eth0] (merged)
        THREADMGMT[t3086329552]: ntop RUNSTATE: RUN(4)
        THREADMGMT[t2982140816]: NPS(1): Started thread for network packet sniffing [eth0]
        THREADMGMT[t2982140816]: NPS(eth0): pcapDispatch thread starting [p30923]
        THREADMGMT[t2982140816]: NPS(eth0): pcapDispatch thread running [p30923]
        THREADMGMT[t3047009168]: SIH: Idle host scan thread running [p30923]
        THREADMGMT[t3057499024]: SFP: Fingerprint scan thread running [p30923]
        **WARNING** packet truncated (8814->8232)
        **WARNING** packet truncated (10274->8232)
        **WARNING** packet truncated (8814->8232)
        **WARNING** packet truncated (8814->8232)
        ...

    Do I need to configure something? I tried to access the web interface (http://localhost:3000) but it does not work. Note: I'm on CentOS. EDIT: Not sure if it helps, but here is my ifconfig output:

        eth0      Link encap:Ethernet  HWaddr 00:16:76:BC:7E:77
                  inet addr:192.168.0.221  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::216:76ff:febc:7e77/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15496640 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:19256813 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:836230629 (797.4 MiB)  TX bytes:608496148 (580.3 MiB)
                  Memory:dffe0000-e0000000
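
    Captures of 8814 bytes are larger than ntop's snap length, which often happens when the NIC merges segments (GRO/TSO) before handing them to libpcap. A hedged check, assuming eth0 is the capture interface:

        # show the current offload settings
        ethtool -k eth0
        # temporarily disable the merge features so captured frames stay within the MTU
        sudo ethtool -K eth0 gro off gso off tso off
        # for the web interface, confirm ntop is actually listening on port 3000
        sudo netstat -tlnp | grep 3000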

    Read the article

  • Web server Python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks, based on real stock quotes, with virtual money. The stock quotes are in a MySQL database and are updated using a Python script which runs every minute or so. Now, this works fine on my local machine with XAMPP, but what about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script does some other SQL tasks too, so it can't be replaced by PHP or the like. So, are there any good web hosting services out there that give me the possibility of running a script in the background and hosting a website in the foreground? What server specifications do I have to look for? Thanks in advance! PS: I've done some research, and I found a Python-supporting web host WITH SSH support. Is that what I need? Or is SSH not allowed to start processes?
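
    Most shared hosts that allow SSH also allow cron jobs, which is usually how a once-a-minute updater is run instead of a persistent background process. A minimal sketch of a crontab entry (paths are assumptions):

        # added via "crontab -e" on the hosting account
        * * * * * /usr/bin/python /home/youruser/stocks/update_quotes.py >> /home/youruser/stocks/update.log 2>&1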

    Read the article

  • Site to listen on port 88

    - by JohnMerlino
    I want to get one of my sites to listen on port 88. In ports.conf in /etc/apache2 on the Ubuntu server, I add the following so the web app can listen on port 88:

        NameVirtualHost *:80
        Listen 80
        NameVirtualHost *:88
        Listen 88

    In my /etc/apache2/apache2.conf I have this:

        # Include the virtual host configurations:
        Include sites-enabled/

    Under sites-enabled, I have a file that looks like this:

        Listen *:88
        NameVirtualHost *:88
        <VirtualHost *:88>
            ServerName dogtracking.com
            DocumentRoot /home/doggps/public_html/eaglegps.com/current/public
            <Directory /home/doggps/public_html/eaglegps.com/current/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
            <LocationMatch "^/assets/.*$">
                Header unset ETag
                FileETag None
                # RFC says only cache for 1 year
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </LocationMatch>
        </VirtualHost>

    Then I try to restart Apache:

        /etc/init.d/apache2 restart

    And I get:

        * Restarting web server apache2
        /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted
        Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Thu Oct 18 18:04:21 2012] [warn] NameVirtualHost *:88 has no VirtualHosts
        /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted
        Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Thu Oct 18 18:04:22 2012] [warn] NameVirtualHost *:88 has no VirtualHosts
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        Action 'start' failed.
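
    Two things stand out in that output: the vhost file adds a second Listen 88 / NameVirtualHost *:88 on top of ports.conf, and the restart appears to have been run as a non-root user (hence the ulimit and bind-permission errors). A quick way to confirm both, using standard Apache tooling:

        # dump how Apache has parsed the listeners and virtual hosts
        sudo apache2ctl -S
        # confirm nothing else already owns ports 80 or 88
        sudo netstat -tlnp | grep -E ':(80|88) '
        # restart with root privileges so the ulimit and bind steps can succeed
        sudo /etc/init.d/apache2 restart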

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor / complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled. Here is the reply from my host: "…as the main domain has its own separate DNS entry it has a localhost entry which helps for loopback connections, whereas subdomains don't have a separate DNS zone, so it is not possible to create loopback connections for them." I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!).

        nslookup itours.memelab.com.au
        Server:  203.88.112.33
        Address: 203.88.112.33#53
        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177

    Read the article

  • Apache Server not working in MAMP

    - by jasonaburton
    Here's what I did before the problem started: I was creating a database, for a site that I am working on, in phpMyAdmin. I wrote some code to try to connect to the database I had just created and I couldn't connect. I assumed it might be because I needed a password to connect to the database, so I created a password for it. Immediately after I created the password, phpMyAdmin kicked me out saying: "#1045 - Access denied for user 'root'@'localhost' (using password: YES)" "phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server." I found the php.ini file and searched for where I could change the password to match the one I just made, but couldn't find where I needed to change it. So I decided to scrap the database and uninstall MAMP from my computer and reinstall it, hoping it would just reset all the defaults and I could go on my merry way. But now, after reinstalling MAMP and trying to run the servers, Apache won't start up and I have no idea why. One problem after another... Any advice or helpful ideas?
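
    To see why Apache refuses to start, MAMP's own logs and a config syntax check are usually the quickest route; a sketch using MAMP's default install paths (adjust if yours differ):

        # syntax-check the bundled Apache configuration
        /Applications/MAMP/Library/bin/apachectl configtest
        # read the most recent startup errors
        tail -n 50 /Applications/MAMP/logs/apache_error.log
        # check whether something else already holds MAMP's Apache port (80 or 8888)
        sudo lsof -i :8888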

    Read the article

  • RTNETLINK answers: File exists... maybe because I assigned a new MAC address

    - by steven
    I get "RTNETLINK answers: File exists / Failed to bring up eth0:1" on "ifup eth0:1". I suspect it happens because I assigned a new MAC address in my VM's network adapter. Can you tell me how to fix the issue? My configuration looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.1.80
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

        # Alias being connected to 192.168.10.x Network
        auto eth0:1
        allow-hotplug eth0:1
        iface eth0:1 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    Why do I suddenly get "RTNETLINK answers: File exists..."? I worked with this configuration before without problems. All I did in the past was renew the adapter's MAC address. At the moment I am connected to the 192.168.10.x network, and if I do /etc/init.d/networking stop followed by /etc/init.d/networking start, then I get "RTNETLINK [...] failed to bring up eth0:1"; but the strange thing is that I am still able to connect to 192.168.10.83 via SSH from my host machine. However, I cannot reach the internet from the Debian client. I hope it is clear now what my problem is. Update: if I change my /etc/network/interfaces like this, then "ifup eth0" fails too, with the same error!

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    With the verbose option enabled I got:

        Configuring interface eth0=eth0 (inet)
        run-parts --verbose /etc/network/if-pre-up.d
        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
        RTNETLINK answers: File exists
        Failed to bring up eth0.

    The same happens if I type this manually:

        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
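
    "File exists" from ip/ifup usually means the address or a conflicting route is already present, which fits an alias being brought up twice or a second default gateway (the configuration above declares a gateway in both stanzas, and only one default route can be installed). A hedged diagnostic sketch:

        # what is already configured right now?
        ip addr show eth0
        ip route show
        # clear the stale addresses, then bring the interfaces up again
        sudo ip addr flush dev eth0
        sudo ifdown eth0 ; sudo ifup eth0
        sudo ifup eth0:1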

    Read the article

  • Issues with Apache redirect to www-prefixed URL

    - by lamp_scaler
    I have a website with domain mysite.com. I would like to have it so that if a user types in "mysite.com" it will redirect to "www.mysite.com". Additionally, "mysite.com/subdir" will also redirect to "www.mysite.com/subdir". I've looked and made changes with vhosts and also rewrites, but it's not working for the "mysite.com/subdir"-"www.mysite.com/subdir" case. Every time I type in "mysite.com/subdir", it will redirect to "www.mysite.com". Only "www.mysite.com/subdir" works. Not sure how to troubleshoot this. I turned on rewrite logs and didn't see anything obvious, yet. This is my config files so far. Please let me know what I'm missing. Thanks! FYI: I'm using CentOS 5.4, nginx 1.2.0 on top of Apache 2.2.3. The site itself is built with CodeIgniter framework. http.conf: ServerTokens Prod ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 120 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 10 ServerLimit 256 MaxClients 60 MaxRequestsPerChild 10000 #StartServers 8 #MinSpareServers 5 #MaxSpareServers 20 #ServerLimit 256 #MaxClients 256 #MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Listen 69 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module 
modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule mem_cache_module modules/mod_mem_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so #rpaf settings LoadModule rpaf_module modules/mod_rpaf-2.0.so RPAFenable On RPAFproxy_ips 127.0.0.1 RPAFsethostname On # The header where the real client IP address is stored. RPAFheader X-Forwarded-For Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost ServerName www.mysite.com:80 UseCanonicalName Off DocumentRoot "/var/www/html" <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory "/var/www/html"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <IfModule mod_userdir.c> UserDir disable </IfModule> DirectoryIndex index.html index.html.var AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> <DirectoryMatch "^/.*/\.svn/"> Order deny,allow Deny from all </DirectoryMatch> TypesConfig /etc/mime.types DefaultType text/plain <IfModule mod_mime_magic.c> MIMEMagicFile conf/magic </IfModule> HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature Off Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> <IfModule mod_dav_fs.c> DAVLockDB /var/lib/dav/lockdb </IfModule> ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif #AddDescription "GZIP compressed document" .gz #AddDescription "tar archive" .tar #AddDescription "GZIP compressed tar archive" .tgz ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en zh-CN zh-TW ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType text/x-component .htc AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> </IfModule> </IfModule> BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully vhost.conf: NameVirtualHost *:69 <VirtualHost *:69> ServerName mysite.com ServerAlias vip.mysite.com Redirect / http://www.mysite.com/ </VirtualHost> <VirtualHost *:69> DocumentRoot /home/mysite/mysite/www ServerName www.mysite.com </VirtualHost> <VirtualHost *:69> DocumentRoot /home/mysite/mysite/www/assets ServerName static.mysite.com </VirtualHost> <VirtualHost *:69> DocumentRoot /home/mysite/admin/www ServerName admin.mysite.com </VirtualHost> <VirtualHost *:69> DocumentRoot /home/other/trunk/www ServerName othersite.com ServerAlias www.othersite.com </VirtualHost> <VirtualHost *:69> DocumentRoot /var/www/html ServerName test.mysite.com ServerAlias test2.mysite.com </VirtualHost> /home/mysite/mysite/www/.htaccess: RewriteEngine on # In my case all CI files are outside this web root, so we can # allow any files or directories that exist to be displayed directly RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d # hide index.php RewriteRule .* index.php/$0 [L] # BEGIN Compress text files <ifModule mod_deflate.c> <filesMatch "\.(css|js|x?html?|php)$"> SetOutputFilter DEFLATE </filesMatch> </ifModule> # END Compress text files # BEGIN Expire headers <ifModule mod_expires.c> ExpiresActive On ExpiresDefault "access plus 1 seconds" ExpiresByType image/x-icon "access plus 2592000 seconds" 
ExpiresByType image/jpeg "access plus 2592000 seconds" ExpiresByType image/png "access plus 2592000 seconds" ExpiresByType image/gif "access plus 2592000 seconds" ExpiresByType application/x-shockwave-flash "access plus 2592000 seconds" ExpiresByType text/css "access plus 604800 seconds" ExpiresByType text/javascript "access plus 604800 seconds" ExpiresByType application/javascript "access plus 604800 seconds" ExpiresByType application/x-javascript "access plus 604800 seconds" ExpiresByType application/xhtml+xml "access plus 600 seconds" </ifModule> # END Expire headers # BEGIN Cache-Control Headers <ifModule mod_headers.c> <filesMatch "\.(ico|jpe?g|png|gif|swf)$"> Header set Cache-Control "max-age=2592000, public" </filesMatch> <filesMatch "\.(css)$"> Header set Cache-Control "max-age=604800, public" </filesMatch> <filesMatch "\.(js)$"> Header set Cache-Control "max-age=604800, private" </filesMatch> </ifModule> # END Cache-Control Headers # BEGIN Turn ETags Off <ifModule mod_headers.c> Header unset ETag </ifModule> FileETag None # END Turn ETags Off /etc/nginx/conf.d/default.conf: server { listen 80; server_name static.mysite.com; location / { root /home/mysite/mysite/www/assets; index index.html index.htm; expires max; } } server { listen 80; server_name *.mysite.com www.mysite.com vip.mysite.com; #Set this larger if uploading big files client_max_body_size 5m; location / { proxy_pass http://127.0.0.1:69; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; #client_max_body_size 10m; client_body_buffer_size 128k; proxy_buffer_size 4k; proxy_buffers 4 32k; } }
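
    Since nginx answers on port 80 in front of Apache, the bare domain can be normalised before the request ever reaches the backend. A sketch of an extra server block for /etc/nginx/conf.d/default.conf that keeps the request URI, so /subdir survives the redirect (names mirror the config above):

        server {
            listen 80;
            server_name mysite.com;
            return 301 http://www.mysite.com$request_uri;
        }

    The existing wildcard server_name *.mysite.com does not match the bare mysite.com, so without a block like this the bare name falls through to whichever server block nginx treats as the default for port 80, which may be part of why the /subdir requests behave differently.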

    Read the article

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now... we are getting new air conditioning installed. This takes 36 hours, and for this time almost all servers get shut down; only 2 servers remain running for the most important tasks (i.e. accepting incoming email, delivering some important websites, login server). Everybody was informed that if they needed data from their home directories they should fetch it before the shutdown. Long story short: someone realized that he has to run a certain program on one of the servers. No problem, he can log in remotely to our login server and run the program there without a home directory (the binaries are local and the necessary information can be copied to /tmp). That works like a charm until... the user needs to run a GUI program. I can find no easy way to make it run; usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, all X servers listen only locally, not on TCP ports, so no remote X connection is possible. The SSH config is watertight - i.e. no way to set environment variables. My problem is that the proxy MIT cookie generated by ssh gets lost because .Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into an .Xauthority in /tmp. The only other option (besides changing the config) which came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure if the TCP/Unix domain socket stuff works as expected). Any good suggestions (for the future - now our servers are already up)?
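
    One admin-side workaround follows from the fact that sshd hands the X11 "proto cookie" pair to /etc/ssh/sshrc (or ~/.ssh/rc) on standard input, so the cookie can be stored somewhere other than the missing home directory. A sketch loosely following the example in sshd(8); the /tmp path is an assumption, and the fuller man-page version also special-cases localhost: displays:

        #!/bin/sh
        # /etc/ssh/sshrc -- runs at every ssh login, before the user's shell
        if read proto cookie && [ -n "$DISPLAY" ]; then
                XAUTHORITY="/tmp/.Xauthority-$(id -un)"
                export XAUTHORITY
                echo add "$DISPLAY" "$proto" "$cookie" | xauth -q -
        fi

    The user then has to point XAUTHORITY at the same file in the login shell (export XAUTHORITY=/tmp/.Xauthority-$USER) before starting the GUI program.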

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server set up for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, whereas Nginx only does 5000 requests/sec. Is such a performance degradation expected? Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
          worker_connections 768;
        }
        http {
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 65;
          types_hash_max_size 2048;
          upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
          }
          server {
            listen 8080 default;
            keepalive_timeout 5;
            root /home/app/app/static;
            location / {
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_pass http://app_server;
            }
          }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
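
    Part of the gap is simply the extra hop plus nginx opening a new connection to gunicorn for every request. A hedged tuning sketch (upstream keepalive needs nginx 1.1.4 or newer; the socket path is an assumption):

        upstream app_server {
            server unix:/tmp/gunicorn.sock fail_timeout=0;
            keepalive 64;                    # reuse connections to gunicorn
        }
        server {
            listen 8080 default;
            location / {
                proxy_http_version 1.1;      # required for keepalive upstreams
                proxy_set_header Connection "";
                proxy_pass http://app_server;
            }
        }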

    Read the article

  • Login error in phpMyAdmin, problem setting auth_type in config.inc.php

    - by sergiom
    I'm having a problem accessing phpMyAdmin. A few weeks ago I did succeed in configuring it for auth_type = 'cookie', but I still received an error stating that I had to set blowfish_secret. That was strange because it was set. So I changed auth_type from cookie to http, but it didn't work. I changed it back to cookie, but it doesn't work anymore. This is the error:

        phpMyAdmin - Error
        Cannot start session without errors, please check errors given in your PHP and/or
        webserver log file and configure your PHP installation properly.

    This is my C:\wamp\apps\phpmyadmin3.2.0.1\config.inc.php:

        <?php
        /* Servers configuration */
        $i = 0;

        /* Server: localhost [1] */
        $i++;
        $cfg['Servers'][$i]['verbose'] = 'localhost';
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['port'] = '';
        $cfg['Servers'][$i]['socket'] = '';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['user'] = '';
        $cfg['Servers'][$i]['password'] = '';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['Servers'][$i]['blowfish_secret'] = 'this is my passphrase';
        /* End of servers configuration */

        $cfg['DefaultLang'] = 'en-utf-8';
        $cfg['ServerDefault'] = 1;
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';
        ?>

    I changed the blowfish_secret, since I don't remember the old one, and I deleted the cookies in my browser and restarted all WAMP services and the browser. After I enter the username and password on the login page I get the error. I've tried searching through the log files, but I'm a newbie and I'm not sure I've searched the right ones. I'm using WampServer 2.0, which has Apache 2.2.11, PHP 5.3.0, MySQL 5.1.36 and phpMyAdmin 3.2.0.1.
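
    The "Cannot start session" message is PHP failing to write its session data rather than a MySQL credential problem, so the session settings in the php.ini that Apache loads are worth checking first; a sketch with WampServer's usual locations (paths are assumptions):

        ; in C:\wamp\bin\apache\apache2.2.11\bin\php.ini (the copy Apache actually reads)
        session.save_path = "c:/wamp/tmp"
        ; the directory must exist and be writable by the Apache service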

    Read the article

  • What are the problems and pitfalls of a public-facing Active Directory?

    - by Ralph Shillington
    The situation I'm faced with is this: we plan on using a number of server applications hosted on Amazon EC2 machines, mainly Microsoft Team Foundation Server. These services rely heavily on Active Directory. Since our servers are in the Amazon cloud, it should go without saying (but I will) that all our users are remote. It seems that we can't set up a VPN on our EC2 instance -- so the users will have to join the domain directly over the internet; then they'll be able to authenticate and, once authenticated, use that token for accessing resources such as TFS. On the DC instance, I can shut down all ports except those needed for joining/authenticating to the domain. I can also filter the IPs on that machine to just the addresses that we expect our users to be at (it's a small group). On the web-based application servers, I imagine all we need to open is port 80 (or 8080 in the case of TFS). One of the problems I'm faced with is what domain name to use for this Active Directory. Should I go with "ourDomainName.com" or "OurDomainName.local"? If I choose the latter, does that not mean that I'll have to get all our users to change their DNS address to point to our server so it can resolve the domain name (I guess I could also distribute a hosts file)? Perhaps there is another alternative that I'm completely missing.

    Read the article

  • HTTP cache for my virtual machines

    - by MathematicalOrchid
    I have several Linux virtual machines running on my home PC. One of the quirks of Linux is that every time you run a package manager, it wants to "refresh" the configured software repositories - which basically means it wants to download a file from the Internet. If I revert to an earlier snapshot of the VM, then next time I run the package manager it will re-download the exact same data again [since it no longer exists in the VM]. It seems a shame to waste bandwidth endlessly downloading the same data over and over again, so I was wondering if there's some way I can set up some kind of HTTP proxy server that caches downloaded files. I have no idea how you would do such a thing though. In particular, it needs to be set up so that the VMs don't need to "know" that the cache is there; it needs to be transparent. But I don't know how to do that. Any suggestions on what software I'd need to use? It would be nice if I could run it under the Windows host OS, but running a small VM with a Linux guest is also possible...
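
    For Debian/Ubuntu-style guests the usual answer is a caching proxy such as apt-cacher-ng running somewhere permanent, with each VM pointed at it; it is a one-line client change rather than fully transparent. A sketch (the cache host's IP and apt-based guests are assumptions):

        # on an always-on Linux VM (or a future Linux host): install the cache,
        # which listens on port 3142 by default
        sudo apt-get install apt-cacher-ng

        # in each guest, create /etc/apt/apt.conf.d/01proxy containing:
        Acquire::http::Proxy "http://192.168.56.1:3142";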

    Read the article

  • What is the proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS to www.example.com, so a CNAME for www.example.com doesn't seem appropriate. Here's my question: is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file?

        $ cat /etc/hosts
        127.0.1.1       trinity.local trinity
        99.100.101.102  trinity.example.com trinity www.example.com

    This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; I bolded the line that seems to apply here:

        Update /etc/hosts
        Next, edit your /etc/hosts file to resemble the following example, replacing "plato" with your
        chosen hostname, "example.com" with your system's domain name, and "12.34.56.78" with your
        system's IP address. As with the hostname, the domain name part of your FQDN does not
        necessarily need to have any relationship to websites or other services hosted on the server
        (although it may if you wish). As an example, you might host "www.something.com" on your
        server, but the system's FQDN might be "mars.somethingelse.com."

        File: /etc/hosts
        127.0.0.1   localhost.localdomain localhost
        12.34.56.78 plato.example.com plato

        The value you assign as your system's FQDN should have an "A" record in DNS pointing to your
        Linode's IP address. For more information on configuring DNS, please see our guide on
        configuring DNS with the Linode Manager.
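
    Whichever layout is chosen, it is easy to confirm that the forward and reverse records agree; a quick check with dig (the IP is the placeholder from the question):

        dig +short www.example.com A        # should print 99.100.101.102
        dig +short trinity.example.com A    # same address if both A records exist
        dig +short -x 99.100.101.102        # the PTR name returned by the reverse lookup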

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script – the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies; they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
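
    Before digging further, it is worth confirming that the Hyper-V writer is healthy at the moment the snapshot is taken and that VSS inside the Windows guests is not failing silently; a hedged pair of checks:

        rem on the Hyper-V host, right after a backup attempt
        vssadmin list writers

        rem inside one of the Windows guests (its own VSS does the in-guest freeze)
        vssadmin list writers
        vssadmin list shadows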

    Read the article

  • How to place a virtual machine in DMZ?

    - by Giordano
    I have an Ubuntu 12.04 server running a few virtual machines with KVM. I would like to expose some of these virtual machines on the internet, to make it possible for customers to test the products we're developing and to make other products available for demo purposes. One of the server NICs is configured with a public IP. However, before exposing anything on the web I would like to be sure that if one of the virtual machines gets compromised, the attacker doesn't reach the rest of the hosts. What I would like to do is put these virtual machines into a DMZ. These are the steps I'm planning:
    1. Create a tap interface on the virtualization host (let's say tap1).
    2. Create a bridge using tap1 and give it an IP in a subnet separate from the other hosts. Let's say 10.0.0.1.
    3. Attach the DMZ virtual machines to the bridge and configure their IPs statically (10.0.0.2, 10.0.0.3, etc.).
    4. Using UFW, forbid any traffic from 10.0.0.0/24 to any of the internal hosts, allow traffic from the internal hosts towards 10.0.0.0/24, and expose the virtual machines on the web using port forwarding.
    Do you think this setup is safe? Can you suggest any improvement or a better/safer approach? Thanks in advance!
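
    Step 4 ends up as rules on the FORWARD path, since the DMZ traffic is routed or bridged through the host. UFW can express this, but the underlying iptables form makes the intent clearest; a sketch assuming the internal LAN is 192.168.1.0/24 (adjust to the real subnet):

        # block new connections initiated from the DMZ towards the internal network
        iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.1.0/24 -m state --state NEW -j DROP
        # allow internal hosts to reach the DMZ, and let the replies come back
        iptables -A FORWARD -s 192.168.1.0/24 -d 10.0.0.0/24 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT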

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, which is where everyone is getting data from. In my client's office, we have done the following:
    - Swapped out his computer (Win7 and XP box)
    - Swapped out the desktop switch in his office
    - Removed the desktop switch in his office
    - Changed out the network cable going to the wall
    - Ran 'net config server /autodisconnect:-1' on the server
    - Disabled remote differential compression on his current Win7 box
    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server that he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he suddenly couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall or patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2

    - by Triztian
    Hello all, it seems my involvement with computers has grown and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine - I mean I don't know how it should be accessed. The server has a public IP and we also use it as a host for our website (I don't know if that affects it, though). The folder will be used as the "keeper" for the QuickBooks company files and has the Database Server Manager installed. I've tried setting up a VPN connection to it, but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if it could be accessed that way. Also, the share has a location displayed when I right-click Properties. Here's what I've tried:
    - Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "Connection failed, error 800". I suppose this is because in the domain field I entered the server's workgroup.
    - Right-click "Add network connection" (Windows 7): went through the wizard until I reached the point of entering the location, and tried many things - the name in the share's properties (\\SOMETHING\Share), the http://www.example.com, the IP address.
    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.
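
    Once some kind of VPN or tunnel is in place, the share itself is normally mapped with net use against the server's name or address rather than the http:// URL; a sketch with placeholder names:

        net use Z: \\SERVER-NAME-OR-IP\ShareName /user:SERVERNAME\quickbooksuser *

    The trailing * makes it prompt for the password. Exposing SMB (port 445) directly to the internet is generally avoided, which is why the VPN attempt is the right direction; error 800 usually just means the VPN ports never reached the server (firewall or missing port forwarding) rather than a credential problem.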

    Read the article

  • Connect to WEP Wireless Network by command line on Ubuntu

    - by Tim
    Hi, I am a newbie to both networking and Linux. I am now trying to connect to a WEP wireless network from the command line on my Ubuntu 8.10, because the Network Manager does not support 64-bit WEP. (1) I first bring down the Network Manager and then try to connect to a wireless network whose essid is candy and whose password is 5673212741. But it fails, as shown in the following. I wonder why, and how to do it correctly?

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager       [ OK ]
        $ sudo iwconfig wlan0 essid candy open
        $ sudo iwconfig wlan0 key 18018ce78e open
        $ sudo iwconfig wlan0 key 5673212741 open
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 9971
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 20
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 13
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 9
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
        $ ping www.bbc.co.uk
        ping: unknown host www.bbc.co.uk

    (2) A less important question: why does scanning for wireless networks not work after I bring down the Network Manager?

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager       [ OK ]
        $ sudo iwlist wlan0 scan
        wlan0     Interface doesn't support scanning : Network is down

    Thanks and regards!
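
    With 64-bit WEP the key format matters: a 10-hex-digit key is passed as-is, an ASCII passphrase needs the s: prefix, and the interface has to be up before scanning works. A sketch of the usual sequence (names taken from the question):

        sudo /etc/init.d/NetworkManager stop
        sudo ifconfig wlan0 up                    # answers (2): scanning fails while the link is down
        sudo iwlist wlan0 scan | grep -A2 candy   # confirm the AP is visible
        sudo iwconfig wlan0 essid candy key open 5673212741
        sudo dhclient wlan0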

    Read the article
