Search Results

Search found 4073 results on 163 pages for 'hosts deny'.


  • Custom host file for Firefox

    - by acidzombie24
    Instead of changing my hosts file, I'd like Firefox to think my domain points at my own server while my other browser uses the real IP address. I used to edit my hosts file, but that forces both browsers to use the changed IP address. I found the Change Host extension, but it doesn't appear to use an alternative hosts file, and I saw a comment asking when it would work on Firefox 6+. I also tried HostAdmin, and it falls short: it works, but the alternative IP address must already be in your hosts file (which I don't want), and it only lets you deselect a domain so that the hosts file entry is ignored, which is not what I want.
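
    One browser-local workaround (my suggestion, not from the question) is a proxy auto-config file: Firefox can be pointed at a PAC file in its connection settings while every other browser keeps resolving DNS normally. This treats your server as an HTTP proxy for that one domain, which a plain Apache will generally answer, though it is a hack. A minimal sketch, with the domain and IP as placeholders:

        // proxy.pac
        function FindProxyForURL(url, host) {
            // Send only the development domain to the local server
            if (host == "example.com" || host == "www.example.com")
                return "PROXY 10.0.0.5:80";
            return "DIRECT";  // everything else resolves normally
        }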

    Read the article

  • What is an automated way to access a virtual machine's webserver from its host machine? [closed]

    - by Jonnybojangles
    As a web developer, what is the most efficient (automated) way to connect to a virtual machine (VM) running a development webserver from its host machine (the machine running the VM) when you do not have control over the networks (home, Starbucks, work, etc.) you are connected to? Currently I start my VM (a VirtualBox VM running CentOS) and run ifconfig to determine the VM's current IP. I then take that IP and map it in my host machine's hosts file so that I can access the VM's webserver from the host. I feel that this is not an efficient way to connect to my VM's webserver, because each time I connect to a new network (a few times a day) I need to repeat the IP lookup and hosts file update, and sometimes restart the VM's network service.
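
    One way to sidestep the per-network lookup entirely (my suggestion, not from the question) is a VirtualBox NAT port-forwarding rule, which pins the guest's webserver to a fixed port on the host's loopback interface no matter which network the host joins. A sketch, with the VM name and ports as placeholders:

        # Forward host 127.0.0.1:8080 to guest port 80 on the first NAT adapter
        VBoxManage modifyvm "CentOS-dev" --natpf1 "devhttp,tcp,127.0.0.1,8080,,80"

    After that, a single permanent hosts entry (127.0.0.1 dev.local) works on every network, and the site is always at http://dev.local:8080/.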

    Read the article

  • Mac has IP address, can connect to router but can't connect outside

    - by partition
    Weird problem: my MacBook can't connect anywhere right now! The router works, the MacBook gets an IP and can log into the router, but it can't resolve anything! The router itself is fine, as I connected another device to it and that device reached the net. The MacBook doesn't have any strange DNS configuration either, just 192.168.1.1 for the router. I even tried tethering it to my phone, and it still would not connect to the net... help?
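
    Two quick checks (my suggestion, not from the question) would separate a DNS failure from a routing failure; if the ping succeeds but the lookup fails, the problem is name resolution on the Mac rather than connectivity:

        ping -c 3 8.8.8.8             # raw reachability past the router
        nslookup apple.com 8.8.8.8    # resolution via an external DNS server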

    Read the article

  • Linux (lsusb) not showing String Descriptors of a USB device

    - by tzippy
    I have an embedded device that, when plugged into a Linux host, shows up with a VID and PID that are not in the usb.ids file (proprietary IDs). I provide string descriptors that do show up when the device is plugged into a Windows host, but not on a Linux host. lsusb -v shows only:

        iManufacturer  3
        iProduct       2
        iSerial        1

    On the device side, when processing the setup requests, I see that the strings are actually requested by the host, by Windows and also by Linux. The USB Device Tree Viewer under Windows shows this output:

        iManufacturer  : 0x01  Language 0x0409 : "My Manufacturer"
        iProduct       : 0x02  Language 0x0409 : "MyProduct"
        iSerialNumber  : 0x03  Language 0x0409 : "My Serial"

    I feel that lsusb does not show all of the information. Is there a more informative tool?
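
    A likely explanation (my note, not from the question): lsusb can only fetch string descriptors when it has permission to open the device, which on most distributions means running as root; otherwise it prints just the bare descriptor indexes, exactly as shown. Worth trying (the VID:PID is a placeholder):

        sudo lsusb -v -d 1234:5678    # as root, the strings should appear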

    Read the article

  • Erlang Supervisor Strategy For Restarting Connections to Downed Hosts

    - by derdewey
    I'm using Erlang as a bridge between services, and I was wondering what advice people have for handling downed connections? I'm taking input from local files and piping them out to AMQP, and it's conceivable that the AMQP broker could go down. In that case I would want to keep retrying to connect to the AMQP server, but I don't want to peg the CPU with those connection attempts. My inclination is to put a sleep into the restart of the AMQP code. Wouldn't that hack essentially circumvent the purpose of failing quickly and letting Erlang handle it? More generally, should the Erlang supervisor behaviour be used for handling downed connections?

    Read the article

  • Windows can see Ubuntu Server printer, but can't print to it

    - by Mike
    I have an old desktop that I'm trying to set up as a home backup/print server. Backup was trivial, but I am having issues getting the printing to work. The printer is connected to the server running Ubuntu Server 9.10 (no GUI). If I access the printer via http://hostname:631/printers/, I am able to print a test page, so I know the printer is working; however, I am having no luck from Windows. Windows can see the printer when browsed via \hostname\, but I am unable to connect. Windows says "Windows cannot connect to the printer" without indicating why. Any suggestions? From /etc/samba/smb.conf:

        [global]
        workgroup = WORKGROUP
        dns proxy = no
        security = user
        username map = /etc/samba/smbusers
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user
        load printers = yes
        printing = cups
        printcap name = cups

        [printers]
        comment = All Printers
        browseable = no
        path = /var/spool/samba
        writable = no
        printable = yes
        guest ok = yes
        read only = yes
        create mask = 0700

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        read only = yes
        guest ok = yes

    From /etc/cups/cupsd.conf:

        LogLevel warn
        SystemGroup lpadmin
        Port 631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseRemoteProtocols CUPS
        BrowseAddress @LOCAL
        BrowseLocalProtocols CUPS dnssd
        DefaultAuthType Basic

        <Location />
          Order allow,deny
          Allow all
        </Location>

        <Location /admin>
          Order allow,deny
          Allow all
        </Location>

        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow all
        </Location>

        <Policy default>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        <Policy authenticated>
          <Limit Create-Job Print-Job Print-URI>
            AuthType Default
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
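
    One thing worth ruling out first (my suggestion, not from the question): since the test page already prints from the CUPS web interface, Windows can speak IPP to CUPS directly and bypass Samba printing entirely. When adding a network printer in Windows, point it at the queue's CUPS URL (the queue name is a placeholder):

        http://hostname:631/printers/YourPrinterName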

    Read the article

  • Reading cookies across different hosts

    - by Thinker
    I have two sites; both are my projects. On site two, I need to check whether the user is logged in on site one. I suppose that to do this I should just create a script that puts a cookie into the body of an iframe and then read the iframe contents on site two. But I can't. Here is the code I made for testing purposes: http://jsbin.com/oqaza/edit. I got an error that says: "Permission denied for <http://jsbin.com> to get property HTMLDocument.nodeType from <http://www.google.com>."

    Read the article

  • Do most shared hosts handle gzipped files?

    - by Matrym
    I get the theory, but I'm grappling with gzipping files in practice. How should I go about gzip-compressing my files, and what needs to be done in order to use them on a shared host? Would the following work?

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP:Accept-Encoding} .*gzip.*
        RewriteRule ^/(.*)\.js$ /$1.js.gz [L]
        RewriteRule ^/(.*)\.css$ /$1.css.gz [L]
        AddEncoding x-gzip text.gz
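
    For comparison, a commonly used variant (a sketch under my own assumptions, not a verified answer for this particular host): in per-directory .htaccess context the pattern carries no leading slash, the rewrite should fire only when the precompressed file actually exists, and the response needs correct encoding/type headers so browsers decompress it (mod_headers assumed available):

        RewriteEngine On
        RewriteCond %{HTTP:Accept-Encoding} gzip
        RewriteCond %{REQUEST_FILENAME}.gz -f
        RewriteRule ^(.*)\.(js|css)$ $1.$2.gz [L]

        <FilesMatch "\.js\.gz$">
            ForceType application/javascript
            Header set Content-Encoding gzip
        </FilesMatch>
        <FilesMatch "\.css\.gz$">
            ForceType text/css
            Header set Content-Encoding gzip
        </FilesMatch>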

    Read the article

  • acts_as_ferret with multiple hosts

    - by Nick
    I've got everything working with ferret and acts_as_ferret for development (or localhost DRb), but I can't get my multiple host deployment working. All of the remote systems get ECONNREFUSED when accessing the port. On the ferret server, the daemon is listening on localhost only despite the configuration listing the FQDN as the host. I also tried switching to a UNIX socket to share data between the ferret DRb daemon and the app code but it too gets ECONNREFUSED. (The socket is available to all of the machines via an NFS mount). Is there a better way to do this or should I be looking for another search indexer? Thanks.
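
    If the daemon binds only to loopback, the first place I'd look (my suggestion; the file name and keys are from memory of acts_as_ferret, so treat them as assumptions) is the DRb server config, which controls the address the daemon binds:

        # config/ferret_server.yml
        production:
          host: search.internal.example.com   # must be reachable from the other hosts, not localhost
          port: 9010

    Also note that a UNIX domain socket cannot be shared between machines, even when its path sits on an NFS mount; remote clients need TCP (or a local daemon per host).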

    Read the article

  • Possible to use an IP derived from Dynamic DNS in .htaccess IP allow/deny commands?

    - by user115745
    On a website I manage, I want to use an .htaccess file to allow access to a certain administrative directory only from my home IP address, which is dynamically assigned by my ISP and therefore changes; not regularly, but it does happen. I also have a DynDNS account with one of the auto-update clients making sure the hostname always points to my actual home IP address. (I don't actually host anything at home; I've just set up the dynamic DNS account.) Is there any way to combine these features? That is, is it possible to write the .htaccess allow/deny commands at my outside webhost in a way that my home IP address is not hard-coded into the command, but instead is somehow derived from the hostname DynDNS has assigned me, by doing a real-time lookup every time the directory's .htaccess file is hit? Thank you.
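
    Apache's "Allow from hostname" won't do this, since it compares against the reverse DNS of the connecting client rather than resolving the configured name. A workaround sketch (my suggestion; every path and hostname below is a placeholder) is a cron job on the webhost that resolves the DynDNS name and rewrites the allow line whenever it changes:

        #!/bin/sh
        # Regenerate the admin directory's .htaccess from the current
        # dynamic DNS address; run from cron every few minutes.
        IP=$(dig +short myhome.dyndns.example.org | tail -n1)
        HT=/home/site/public_html/admin/.htaccess
        [ -n "$IP" ] && printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$IP" > "$HT"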

    Read the article

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files on our MySQL 5.0.x. We have a normal master/slave replication setup going: 1 master, 1 slave. Looking at /var/log/mysql.log, nearly 90% of the time the replicator connects, does a SHOW SLAVE HOSTS, and triggers a binlog dump. For example:

        7020 Query        SHOW SLAVE HOSTS
        7020 Binlog Dump  Log: 'mysql-bin.029634'  Pos: 13273

    However, when I do a SHOW SLAVE HOSTS on the master myself, I get no results. Occasionally when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours, and I see nothing in /var/log/syslog at the same time... What's going on here? How can I debug this further? For the record, the MySQL master and slave servers are running Ubuntu Dapper.
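
    Two details that may explain the log (my notes, not from the question): SHOW SLAVE HOSTS only lists slaves that were started with the report-host option, which would account for the empty result on the master; and "Binlog Dump" is the normal command a slave's I/O thread issues to stream the binlog, so seeing it right after the slave connects is expected rather than suspicious. To check from the master:

        SHOW PROCESSLIST;    -- a healthy slave shows one long-running "Binlog Dump" thread
        SHOW SLAVE HOSTS;    -- stays empty unless slaves set report-host in my.cnf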

    Read the article

  • What is proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS for www.example.com, so a CNAME for www.example.com doesn't seem appropriate. Here's my question: is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file?

        $ cat /etc/hosts
        127.0.1.1       trinity.local trinity
        99.100.101.102  trinity.example.com trinity www.example.com

    This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; the line about the "A" record is the one that seems to apply here:

    Update /etc/hosts

    Next, edit your /etc/hosts file to resemble the following example, replacing "plato" with your chosen hostname, "example.com" with your system's domain name, and "12.34.56.78" with your system's IP address. As with the hostname, the domain name part of your FQDN does not necessarily need to have any relationship to websites or other services hosted on the server (although it may if you wish). As an example, you might host "www.something.com" on your server, but the system's FQDN might be "mars.somethingelse.com".

        File: /etc/hosts
        127.0.0.1   localhost.localdomain localhost
        12.34.56.78 plato.example.com plato

    The value you assign as your system's FQDN should have an "A" record in DNS pointing to your Linode's IP address. For more information on configuring DNS, please see our guide on configuring DNS with the Linode Manager.
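
    In zone-file terms, the two-A-record setup the question describes is just (my illustration, using the question's example names and IP):

        ; fragment of the example.com zone
        trinity.example.com.   IN  A  99.100.101.102
        www.example.com.       IN  A  99.100.101.102

    with the PTR record for 99.100.101.102 pointing back at whichever of the two names reverse DNS should report.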

    Read the article

  • What could be the maximum number of hosts on a 100BaseTX Ethernet network?

    - by snowflake
    Hello, I have two IP networks (192.168.1.x and 192.168.2.x) bridged on a server, but all hosts (fixed IPs) are on the same physical 100BaseTX Ethernet (with a daisy chain of 48-port switches). Often, I lose the link between the server and hosts on the second network. Usually if I reboot the host, the connection works again, and if I force the connection to stay active it keeps alive; the link only drops when I ping after a while without an active connection. I'm wondering how many hosts can be connected to the same network without trouble, and how many switches can be daisy-chained? I assume the maximum length is 100 meters between a host and its switch, or 200 meters between the extremities of the network. Somebody is suggesting 8 switches here: http://www.tek-tips.com/viewthread.cfm?qid=1137879&page=12. Any comments on how to find a solution to the problem, even without answering the question directly, are welcome!

    Read the article

  • Can I install applications to Remote Desktop Session Hosts via Group Policy?

    - by CC.
    I have a GPO that installs an application using the Software installation policy under Computer Configuration. I assign this GPO to the OU with our desktop/laptop computers, and my clients all install the software fine. I have another separate OU that covers our new Server 2012 RD session hosts. Previously, we've manually installed applications on our one Terminal Server. Now we have one Broker and two Session Hosts. I'd like to take my existing GPO, assign it to the session hosts, and have it install on the next reboot after a gpupdate so I'm sure that each is identically configured. Given this info: Should I be able to install applications via GPO to Session Hosts? Will Group Policy automatically install the applications as if I put the session host into /install mode, or do I need to do that?
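
    For context (my notes, not from the question): computer-assigned software installation runs at startup, before any user session exists, so it shouldn't need the interactive install mode at all. For comparison, the manual equivalents on a session host:

        rem Interactive installs are normally wrapped like this:
        change user /install
        rem ... run the installer ...
        change user /execute

        rem After linking the GPO, verify it actually applied to the host:
        gpresult /r /scope computer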

    Read the article

  • Substitution of someaddress.com on a local desktop computer

    - by dev
    Here is a VDS server with an IP (for example 105.123.123.123) and a working Apache service. And there is a desktop computer with Linux on board (but really, I presume it makes no difference). I need to type an address like someaddress.com into the web browser and see the website hosted on my server. My /etc/hosts:

        127.0.0.1       localhost
        105.123.123.123 someaddress.com
        105.123.123.123 www.someaddress.com

    But it doesn't work: I still see the real someaddress.com website. What could be wrong? It would be great if you could help me with this. P.S. Why I need this: there is one project with fixed links (like someaddress.com/inf), and I need to test it.
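
    Two checks I'd try first (my suggestion, not from the question), since browsers cache name lookups independently of /etc/hosts edits:

        getent hosts someaddress.com                   # should print 105.123.123.123
        curl -sI http://someaddress.com/ | head -n1    # fetch outside the browser

    If getent shows the right address but the browser still lands on the real site, restart the browser (and nscd, if installed) to flush cached DNS. The Apache vhost on the server must of course answer for the someaddress.com name as well.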

    Read the article

  • Apache on Windows virtual directory config help

    - by sprugman
    I'm running Apache on Windows XP via XAMPP Lite, and could use help configuring my virtual hosts. Here's what I'm hoping to do on my dev box:

    1. My source files live outside of the XAMPP htdocs dir on my local machine.
    2. I can access the project at http://myproject.
    3. Others on my local network can access the project at my.ip.address/myproject.
    4. localhost keeps pointing to XAMPP's htdocs folder so I can easily add other projects.

    I've got 1 & 2 working by editing the Windows hosts file and adding a virtual host in XAMPP's apache\conf\extra\httpd-vhosts.conf file. I don't immediately see how to do 3 without messing up 4.
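
    One way to get item 3 without breaking item 4 (my sketch; all paths are placeholders): with name-based vhosts, requests that match no ServerName, such as those addressed by raw IP, fall through to the first vhost defined, so that first vhost can reproduce the default htdocs setup and carry an Alias for the project:

        # httpd-vhosts.conf (Apache 2.2 syntax)
        NameVirtualHost *:80

        # First vhost = default for requests by IP (keeps localhost on htdocs)
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "C:/xampp/htdocs"
            Alias /myproject "C:/dev/myproject"
            <Directory "C:/dev/myproject">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        # Named host for http://myproject (hosts-file entry on this machine)
        <VirtualHost *:80>
            ServerName myproject
            DocumentRoot "C:/dev/myproject"
        </VirtualHost>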

    Read the article

  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want individual folders for me and my wife: only I can read/write my folder, only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want a "public" folder where we can both read/write its contents (and any sub-folders created), but the "kids" account can only read from this folder and its sub-folders. It seems really confusing to set up something like this, and it really shouldn't be. I am confused by the "allow", "deny", and dimmed check boxes in the Security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models: if I am in two groups and access is denied to one group but allowed to the other, Windows denies me access because I am in a group that has access disallowed.
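
    The behavior described at the end is by design: an explicit Deny entry is evaluated before any Allow, so denying "Everyone" also denies yourself. A layout like this is usually built without Deny entries at all, granting access only to the people who should have it. A sketch with icacls (my suggestion; paths and account names are placeholders):

        rem Private folder: drop inherited permissions, grant only the owner
        icacls C:\Share\Greg /inheritance:r /grant Greg:(OI)(CI)M /grant Administrators:(OI)(CI)F

        rem Public folder: both adults can modify, kids get read-only
        icacls C:\Share\Public /inheritance:r /grant Greg:(OI)(CI)M /grant Wife:(OI)(CI)M /grant Kids:(OI)(CI)RX /grant Administrators:(OI)(CI)F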

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right. (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group.) If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface? EDIT: By request, my complete cupsd.conf (spoiler: it's a minimally edited version of the default config file that ships with CUPS in the Debian wheezy repos):

        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Port 631
        # Listen localhost:631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        # DefaultAuthType Basic
        DefaultAuthType None
        WebInterface Yes

        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow @LOCAL
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
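
    Two server-side things worth checking (my notes, not from the question). First, the "Unauthorized" seen with DefaultAuthType None is expected: the admin limits still say Require user @SYSTEM, and with authentication disabled there is no user to satisfy them, so disabling auth cannot be the fix. Second, with Basic auth CUPS validates against the server's system accounts, so it's worth confirming the group membership took effect and letting cupsctl write a known-good remote-admin configuration to rule out hand-editing slips:

        sudo usermod -aG lpadmin someuser    # "someuser" is a placeholder account
        sudo service cups restart            # cheap way to rule out stale state
        sudo cupsctl --remote-admin          # applies the standard remote-admin settings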

    Read the article

  • Why does my Buffalo router keep sending RDP, NetBIOS, FTP, and HTTP requests?

    - by user192702
    I have the following network setup: Buffalo Router (192.168.100.1) < Watchguard XTM21 (192.168.100.13) < PC. For some reason I keep seeing the following repeating in my XTM21's Traffic Monitor. While I have enabled port forwarding, none of the ports reported below were enabled. Can someone let me know why I'm seeing all of these?

        2013-10-19 23:37:56 Deny 192.168.100.1 192.168.100.13 ftp/tcp 4013 21 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 282700472 win 5840" Traffic
        2013-10-19 23:37:59 Deny 192.168.100.1 192.168.100.13 http/tcp 2459 80 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 296571237 win 5840" Traffic
        2013-10-19 23:38:02 Deny 192.168.100.1 192.168.100.13 8000/tcp 3244 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 298709937 win 5840" Traffic
        2013-10-19 23:38:05 Deny 192.168.100.1 192.168.100.13 8000/tcp 3244 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 298709937 win 5840" Traffic
        2013-10-19 23:38:05 Deny 192.168.100.1 192.168.100.13 rdp/tcp 3896 3389 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 290482691 win 5840" Traffic
        2013-10-19 23:38:08 Deny 192.168.100.1 192.168.100.13 netbios-ns/udp 2110 137 0-External Firebox Denied 78 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" Traffic
        2013-10-19 23:38:32 Deny 192.168.100.1 192.168.100.13 ftp/tcp 4025 21 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 321868558 win 5840" Traffic
        2013-10-19 23:38:35 Deny 192.168.100.1 192.168.100.13 http/tcp 2471 80 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 325918731 win 5840" Traffic
        2013-10-19 23:38:38 Deny 192.168.100.1 192.168.100.13 8000/tcp 3256 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 327854525 win 5840" Traffic
        2013-10-19 23:38:41 Deny 192.168.100.1 192.168.100.13 8000/tcp 3256 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 327854525 win 5840" Traffic
        2013-10-19 23:38:41 Deny 192.168.100.1 192.168.100.13 rdp/tcp 3896 3389 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 327101423 win 5840" Traffic
        2013-10-19 23:38:44 Deny 192.168.100.1 192.168.100.13 netbios-ns/udp 2110 137 0-External Firebox Denied 78 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" Traffic

    Read the article

  • Error with RewriteCond in .htaccess about '-f' option when it is not present.

    - by Tyler Crompton
    Whenever I look at my error logs, this is what I see:

        RewriteCond: NoCase option for non-regex pattern '-f' is not supported and will be ignored.

    However, I am not using -f anywhere. I am still new to Apache. This is what my .htaccess file looks like in the site's root directory:

        # Use PHP5 Single php.ini as default
        AddHandler application/x-httpd-php5s .php

        Options -Indexes

        SetEnv INCLUDES /home1/tylercro/public_html/includes/
        SetEnv TZ America/Chicago

        ErrorDocument 400 /400/
        ErrorDocument 401 /401/
        ErrorDocument 403 /403/
        ErrorDocument 404 /404/
        ErrorDocument 500 /500/

        order allow,deny
        deny from 69.28.58.33
        deny from 95.24.184.87
        deny from 95.108.
        deny from 119.63.196.
        deny from 123.125.71.
        deny from 216.92.127.133
        deny from 204.236.225.207
        allow from all

        RewriteEngine On

        # Take off the end script name if it is an index page.
        RewriteCond %{REQUEST_URI} (.*)(index|default)\.\w{1,4}$ [NC]
        RewriteRule .* %1 [R=301]

        # Force "/" at end of URL if directory.
        RewriteRule (.*)!(\.\w{1,5}$) $1 [R=301]
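
    A pointer on where that warning usually comes from (my note, not from the question): file-test conditions like -f are not regexes, so the [NC] flag is meaningless on them, and Apache warns and then ignores it. Since the file above contains no -f, the offending line most likely lives in a merged configuration: a parent or child directory's .htaccess, or the virtual host config. The triggering pattern looks like this:

        # Produces exactly that warning: [NC] cannot apply to a file test
        RewriteCond %{REQUEST_FILENAME} !-f [NC]
        # Fixed:
        RewriteCond %{REQUEST_FILENAME} !-f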

    Read the article

  • How to block/redirect hosts and subdomains of a host using htaccess?

    - by Sven
    I want to block several host domains and their subdomains, as well as IP addresses, using .htaccess. So far I've added to my .htaccess file:

        # block domains and all subdomains
        Deny from .example.org
        # block domain range: 1.2.3.[1-255]
        Deny from 1.2.3.
        # block single IP
        Deny from 2.3.4.5

    but I still had problems with spam from e.g. server1.example.org. What is wrong with my script? Is it also possible to redirect all requests from certain hosts/IPs to a document (say, info.html)?
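
    One detail that often bites here (my note, not a verified answer): "Deny from .example.org" works by reverse-resolving the connecting IP and forward-confirming the result, so it only blocks clients whose IPs actually have matching reverse DNS; it also matches the client's own address only, so referrer spam that merely claims to come from server1.example.org passes straight through. Redirecting matched hosts/IPs to a document could be done with mod_rewrite (Apache 2.2 syntax, reusing the question's examples):

        RewriteEngine On
        RewriteCond %{REMOTE_HOST} \.example\.org$ [NC,OR]
        RewriteCond %{REMOTE_ADDR} ^1\.2\.3\. [OR]
        RewriteCond %{REMOTE_ADDR} ^2\.3\.4\.5$
        RewriteRule !^info\.html$ /info.html [R=302,L]

    (%{REMOTE_HOST} is only populated when hostname lookups are enabled.)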

    Read the article

  • Web Development Environment: How to distribute edited hosts files over a bunch of Mac machines?

    - by Alex Reds
    I am doing some research to prepare a web development environment for our small (10 people and growing) new office. Use case: for each new web project we usually create a new alias on an Apache server, someproject.companywebsite. From my understanding, in order for the rest of our team (including managers and directors) to see this website locally, they will need to edit their hosts file (e.g. "192.168.1.10 someproject.companywebsite"), and do so again for every new project (which can be 2-5 each week). Solution: I am looking for a way to edit this hosts file only once and distribute it to all the Mac machines in our network at once, or at least far less painfully than poking around with each machine every time, over and over again. Is that possible? Or is that the wrong way of doing it? Perhaps we would do better to set up our own local DNS server and point our router at it? Though running our own DNS server concerns me a bit because of possible network interruptions and other lags, if you know what I mean. Or perhaps there are other workflows for this? What's the best way to handle such things? I'll be grateful to hear advice from experienced admins. I couldn't find this information on the internet, so if you know where to read about it, point me in the right direction. Thank you in advance. Alex
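
    The local-DNS route collapses this to zero per-project work when the server supports wildcard records; dnsmasq is the usual lightweight pick (my suggestion, not from the question; the domain and IP are placeholders matching the example above):

        # /etc/dnsmasq.conf on a small office server or router:
        # resolve every *.companywebsite name to the Apache box in one line,
        # so new project aliases need no client-side changes at all
        address=/companywebsite/192.168.1.10

    The client Macs then only need that box set as their DNS server, once, or handed out automatically via DHCP.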

    Read the article

  • How to link specific ports to specific domains with Apache virtual hosts?

    - by theJoe
    We have a forward-facing Linux box running Apache HTTP Server that is acting as a reverse proxy for several back-end servers. The servers are accessed through specific domain names and ports and are set up as virtual hosts within Apache as such:

        Listen 8001
        Listen 8002

        <VirtualHost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </VirtualHost>

        <VirtualHost *:8002>
            ServerName service.two.mycompany.com
            ProxyPass / http://internal.two.mycompany.com:8002/
            ProxyPassReverse / http://internal.two.mycompany.com:8002/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </VirtualHost>

    The proxy server has only one IP address, and both domains point to it. Accessing internal.one via service.one works fine, as does accessing internal.two via service.two. Now the problem is that Apache does not take the requesting domain into account when accessing the virtual hosts. What I mean is that both domains work on both ports: requests for service.one:8002 are proxied to internal.two:8002, and requests for service.two:8001 are proxied to internal.one:8001, where ideally both of these requests should be denied. I can get around this by creating more virtual hosts that explicitly deny these requests:

        NameVirtualHost *:8001
        NameVirtualHost *:8002

        <VirtualHost *:8001>
            ServerName service.two.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </VirtualHost>

        <VirtualHost *:8002>
            ServerName service.one.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </VirtualHost>

    But this is not an ideal solution, since we plan to add more services to the proxy: each new port would need to be explicitly denied on all the other domains, and each new domain would need to be explicitly denied on all ports it is not utilizing. As we add more services, the number of virtual hosts could get out of hand quickly. My question, then, is whether there is a better way. Can we explicitly tie specific ports to specific domains in a virtual host so that only that domain-port combination is processed, and all other combinations are not? Things I've tried:

    - Adding NameVirtualHost *:8001, etc. without the additional virtual hosts.
    - Setting ProxyRequests On and Off, as well as ProxyPreserveHost On and Off.
    - Adding the server name or IP address to the virtual host header, e.g. <VirtualHost service.one.mycompany.com:8001>.
    - Using the <Proxy> directive inside the virtual host directive.
    - Lots and lots of googling.

    The proxy server is running CentOS 6.2 64-bit with Apache HTTPD Server 2.2.15. As mentioned, the proxy server has only one IP address, and all the domains we are using point to it.
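
    One pattern that scales linearly (my sketch, not a verified fix for this setup): when a request's Host header matches no ServerName on a port, Apache 2.2 routes it to the first vhost defined for that port. Making the first vhost on each port a catch-all therefore needs only one extra vhost per port, however many domains exist:

        NameVirtualHost *:8001

        # Catch-all: must be the FIRST vhost defined on the port
        <VirtualHost *:8001>
            ServerName catchall.invalid
            Redirect permanent / http://errorpage.mycompany.com/
        </VirtualHost>

        # The real service for this port
        <VirtualHost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
        </VirtualHost>

    Repeated per port, the config grows with the number of ports rather than with the domain-times-port product.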

    Read the article
