Search Results

Search found 20409 results on 817 pages for 'url routing'.

Page 454/817

  • Unable to ping gateway via bridge nic

    - by Ara
    I'm trying to install KVM on an Ubuntu 12.04 server. We have multiple NICs on this server, of which we primarily use eth0. The server network runs fine (I'm able to ping the gateway, the DNS server, and servers on the Internet) with this eth0 configuration in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.22.194
            netmask 255.255.255.0
            network 192.168.22.0
            broadcast 192.168.22.255
            gateway 192.168.22.1
            dns-nameservers 10.71.130.58 10.71.130.60
            dns-search test.local

    I installed bridge-utils and configured br0 as below in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.168.22.194
            netmask 255.255.255.0
            network 192.168.22.0
            broadcast 192.168.22.255
            gateway 192.168.22.1
            dns-nameservers 10.71.130.58 10.71.130.60
            dns-search test.local
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

    After this change I'm able to ping servers in the same IP range (192.168.22.2-254) except for 192.168.22.1, which is the gateway; I'm also not able to ping any other servers, and this machine cannot be pinged from the network. The output of route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.22.1    0.0.0.0         UG    100    0        0 br0
        192.168.22.0    0.0.0.0         255.255.255.0   U     0      0        0 br0
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    I've been struggling with this issue for the past 5 days; it would help if anyone could point me in the right direction. Thanks in advance.
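
    A few bridge-level checks can narrow this down; a minimal sketch, assuming standard bridge-utils, iproute2 and arping tooling (package names vary by distribution):

        # Confirm eth0 is enslaved to br0 and that only br0 carries the address
        brctl show br0
        ip addr show br0      # should hold 192.168.22.194
        ip link show eth0     # should be UP and carry no IP address

        # Probe the gateway at layer 2; if this fails, the problem is below IP
        arping -I br0 -c 3 192.168.22.1

        # Some gateways pin the MAC of known hosts; check which MAC br0 now presents
        ip link show br0 | grep ether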

  • Why am I getting messages from cloudfront in my error log?

    - by JK01
    I frequently have messages like this in my website's error log:

        "Script error.". URL: https://e3m4drct5m1ays.cloudfront.net/items/loaders/loader_21.js?pid=21&systemid=13504281c5a501837196c23300f84e66&aoi=1327214632&zoneid=16620&cid=HK&rid=Hong%20Kong%20(general)&ccid=Kowloon&dma=0. Line number: 0

    The error name and stack fields are empty. Now, I don't actually know what CloudFront is or what it does, and I do not refer to this script in my site. So why would I be getting a JS error logged as if it were a script being run on my own site? This is using ELMAH logging.

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com subdomain for my portfolio. When using a WP installation with a wordpress.com address, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree and believe a WP domain would be more likely to be indexed and output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong?

  • Google Bot trying to access my web app's sitemap

    - by geekrutherford
    Interesting find today... I was perusing the event log on our web server for any unexpected ASP.NET exceptions/errors and found the following:

        Exception information:
            Exception type: HttpException
            Exception message: Path '/builder/builder.sitemap' is forbidden.
        Request information:
            Request URL: https://www.bondwave.com:443/builder/builder.sitemap
            Request path: /builder/builder.sitemap
            User host address: 66.249.71.247
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

    At first I thought this was maybe an attempt by a hacker to mess with the sitemap. Using a handy web site (www.network-tools.com) I did a lookup on the IP address and found it was a Google bot trying to crawl the application. In this case I would expect an exception or a 403, since the site requires authentication anyway.
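
    Google's published guidance for telling a real crawler from an impostor is a reverse DNS lookup followed by a forward confirmation, rather than trusting an IP list; a minimal sketch from a shell, assuming the standard host utility (the hostname shown is illustrative; use whatever the first command returns):

        # A genuine crawler's PTR record resolves to a *.googlebot.com or *.google.com name
        host 66.249.71.247

        # Forward-confirm that the returned name points back at the same address
        host crawl-66-249-71-247.googlebot.com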

  • virtualbox instances dedicated-server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a great idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs. Since each Vagrant/Chef recipe sets the VM's host name, I can find/reference the appropriate VM by host name. Finally, the entire infrastructure is not directly accessible via the Internet, so the dedicated server is also the OpenVPN host. The infrastructure may be seen roughly as:

        +-------------------------------------+
        |           Dedicated Server          |
        |                                     |
        |  +-------------+   +------------+   |     +------------------+
        |  |   DNSMasq   |   |  OpenVPN   |<=======>|      Client      |
        |  +-------------+   +------------+   |     +------------------+
        |         ^                ^          |
        |         |                |          |
        |         +-------+--------+          |
        |                 |                   |
        |      +-------+  |  +-------+        |
        |      |  VM1  |--+--|  VM2  |  ...   |
        |      +-------+     +-------+        |
        +-------------------------------------+

    Now some questions I am struggling with:

    1. Are there any other suggestions for accessing a private infrastructure like this? I don't want to reinvent the wheel.
    2. On the dedicated server I don't see the vboxnet0 interface, although VirtualBox is installed (without the GUI) and accessing the virtual boxes via ssh works fine. Did I miss something? (See the sketch after this list.)
    3. dnsmasq must serve the local VMs only, otherwise there is a chance that the local dnsmasq starts serving other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, despite the fact that a second network card (which is not the case here) might then start serving DHCP requests?
    4. How should I configure the VM's network interface so that I am able to access it via OpenVPN and resolve its hostname using dnsmasq? I think it should be the host-only network card. Should I do bridging in the OpenVPN config, or is it sufficient to use routing?
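
    For the vboxnet0 point above: a host-only interface only appears on the host once one has been created, and VirtualBox's own DHCP for it can be switched off so dnsmasq can take over. A minimal sketch, assuming the VBoxManage CLI and a VM named "vm1" (interface and VM names are illustrative):

        # Create a host-only interface; this is what makes vboxnet0 show up on the host
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

        # Disable VirtualBox's built-in DHCP server for that network so dnsmasq can answer instead
        VBoxManage dhcpserver modify --ifname vboxnet0 --disable

        # Attach the VM's second NIC to the host-only network
        VBoxManage modifyvm vm1 --nic2 hostonly --hostonlyadapter2 vboxnet0

    With dnsmasq bound only to that interface (interface=vboxnet0 plus bind-interfaces would be the usual way), it cannot leak DHCP onto eth0.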

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week marked another milestone: the first Java EE 7 specification implementation was added to the GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation integration are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project (note, this is still a Java EE 6 archetype, at least for now):

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Open the project in the NetBeans IDE, as it makes it much easier to edit/add the files. Add the following <repositories>:

        <repositories>
            <repository>
                <id>snapshot-repository.java.net</id>
                <name>Java.net Snapshot Repository for Maven</name>
                <url>https://maven.java.net/content/repositories/snapshots/</url>
                <layout>default</layout>
            </repository>
        </repositories>

    Add the following <dependency>s:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax.ws.rs</groupId>
            <artifactId>javax.ws.rs-api</artifactId>
            <version>2.0-m09</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.0-m05</version>
            <scope>test</scope>
        </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here.

    Here is a simple resource class:

        @Path("movies")
        public class MoviesResource {
            @GET
            @Path("list")
            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList<Movie>();
                movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
                movies.add(new Movie("Toy Story", "Buzz Light Year"));
                movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
                return movies;
            }
        }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project uses the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well; they are available in the downloadable project anyway.

    Build the project:

        mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start with "bin/asadmin start-domain"):

        asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the "testGetMovies" method with:

        @Test
        public void testGetMovies() {
            System.out.println("getMovies");
            Client client = ClientFactory.newClient();
            List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                    .request()
                    .get(new GenericType<List<Movie>>() {});
            assertEquals(3, movieList.size());
        }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource. Run the test with "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using the standard APIs anyway; that workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here.

    Here are some pointers to follow:

    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
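
    A quick way to poke the deployed resource outside of the JUnit test is to hit it directly; a minimal sketch with curl, assuming the WAR is deployed under the context root used above:

        # GET the movie list from the deployed application
        curl -v http://localhost:8080/jersey2-helloworld/webresources/movies/list

        # If JSON support is configured for the application, request it explicitly
        curl -H "Accept: application/json" http://localhost:8080/jersey2-helloworld/webresources/movies/list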

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor through virtio network interfaces. The hypervisor is attached to a 100 Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet. In this setup we're experiencing a clearly noticeable lag between double-clicking a channel in the client and the channel-join actually happening. This happens with a lot of different clients, between versions 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16 ms latency for both the voice and the control channel, but the deviation for the control channel is usually a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100 ms ping and a deviation of about 1000. It seems TCP performance is the problem here. We had no problems on an earlier setup which was in principle quite like the new one; it used a Debian Lenny based Xen hypervisor with a soft-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1
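
    To separate raw guest TCP latency from Murmur itself, a direct TCP/UDP round-trip measurement between a client machine and the guest can help; a minimal sketch, assuming the qperf package is installed on both ends (the hostname is a placeholder):

        # On the KVM guest: start the qperf listener
        qperf

        # On a client machine: compare TCP and UDP round-trip latency to the guest
        qperf guest.example.org tcp_lat udp_lat

        # If TCP is much worse than UDP, offload settings on the virtio NIC are worth a look
        ethtool -k eth0    # show current offload settings inside the guest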

  • .htaccess language redirects with seo-friendly urls

    - by jlmmns
    How do I set up my .htaccess file to detect several languages and redirect them to specific SEO-friendly URLs? Basically every URL needs to go to index.php?lang=(...). So, with English language detection, http://mysite.com has to go to http://mysite.com/en/ (index.php?lang=en). My .htaccess as of now (not working):

        RewriteEngine On

        RewriteCond %{HTTP:HOST} http://mysite.com/
        RewriteCond %{HTTP:Accept-Language} ^en [NC]
        RewriteRule ^$ http://mysite.com/en/ [L,R=301]

        RewriteCond %{HTTP:Accept-Language} ^de [NC]
        RewriteRule ^$ http://mysite.com/de/ [L,R=301]

        RewriteCond %{HTTP:Accept-Language} ^nl [NC]
        RewriteRule ^$ http://mysite.com/nl/ [L,R=301]

        RewriteCond %{HTTP:Accept-Language} ^fr [NC]
        RewriteRule ^$ http://mysite.com/fr/ [L,R=301]

        RewriteCond %{HTTP:Accept-Language} ^es [NC]
        RewriteRule ^$ http://mysite.com/es/ [L,R=301]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(en|de|nl|fr|es)$ index.php?lang=$1 [L,QSA]
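
    One way to see which rule (if any) fires is to send the Accept-Language header by hand and watch for the redirect; a minimal sketch with curl, assuming the site answers at mysite.com:

        # Simulate a German-language browser; show only the status line and Location header
        curl -sI -H "Accept-Language: de-DE,de;q=0.9" http://mysite.com/ | grep -iE '^(HTTP|Location)'

        # Repeat for another language to compare
        curl -sI -H "Accept-Language: en-US,en;q=0.8" http://mysite.com/ | grep -iE '^(HTTP|Location)'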

  • GA and Unique visitors again

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) in the last year, GA says there were 36,500 of them. How is that possible? I thought UV should count each visitor only once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time and visitors are counted more than once?

  • When Googlebot sees a link, will it click it or navigate to it?

    - by FakeRainBrigand
    My site uses pushState and JSON data to display content. So, for example, this might appear on my page:

        <a href="/some/page">some page</a>

    The JavaScript then prevents the default action (following the link) and instead renders a view (using a different API, such as /getjson?some_page):

        $('[href]').click(function(){
            history.pushState(...);
            handleURL(...);
        });

    Assume my server will respond to requests at /some/page with a pre-rendered version. My questions are:

    1. Will Googlebot receive the pre-rendered version, or allow JavaScript to instead invoke pushState, etc.?
    2. If it doesn't make the direct request, will it wait for the AJAX content to be loaded?
    3. Does Googlebot implement pushState, so it will show the proper URL in search results?
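
    A quick sanity check is to fetch the page the way a crawler that executes no JavaScript would; a minimal sketch with curl (the domain is a placeholder for the real site):

        # Fetch as a plain HTTP client; this is the baseline HTML a crawler starts from
        curl -s http://www.example.com/some/page | head -n 50

        # Optionally present Googlebot's user agent, in case the server varies its response on User-Agent
        curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
             http://www.example.com/some/page | head -n 50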

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in netstat -a, in the SCTP section:

        SCTP: Local Address      Remote Address     Swind  Send-Q Rwind   Recv-Q StrsI/O  State
              -----------------  -----------------  ------ ------ ------- ------ -------- ------
              0.0.0.0            0.0.0.0            0      0      102400  0      32/32    CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.).

    Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to make it stop for a few hours, but then it started again.
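
    Capturing the ARP traffic on the wire usually tells more than the switch logs about which interface (and which zone's address) the frames leave from; a minimal sketch, assuming Solaris snoop and a physical interface named e1000g0 (illustrative; substitute the real one):

        # Capture ARP frames on the physical interface into a file
        snoop -d e1000g0 -o /var/tmp/arp.cap arp

        # Later, read the capture back verbosely and look for the 0.0.0.0 sender address
        snoop -i /var/tmp/arp.cap -v | grep -B 5 '0.0.0.0'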

  • '/var/www/' vs '/home/$USER/public_html'

    - by OrganizedFellow
    I recently started using Ubuntu as a LAMP server. I've come across plenty of tutorials that say to place the files at '/var/www/' and I've also seen others that put them in '/home/$USER/public_html/'. During my testing and figuring stuff out, I was successfully able to view a test site URL from each location. Is one better than the other? I thought that maybe it was just preference. But the more I think about it, the more I want to keep all my work in my Home folder.
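
    If the home-directory layout is preferred, the main practical wrinkle is permissions: the web server user has to be able to traverse into the directory. A minimal sketch, assuming Apache running as www-data and a user named alice (both names illustrative):

        # Let the web server traverse the home directory without listing or reading its other contents
        sudo chmod 711 /home/alice

        # Make the document root itself readable
        sudo chmod -R 755 /home/alice/public_html

        # One option is the stock userdir module, which maps /~alice/ to ~/public_html
        sudo a2enmod userdir
        sudo service apache2 reload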

  • HTTP 303 redirection and robots.txt

    - by Ian Dickinson
    On a site I'm working on, we're using the HTTP 303 redirect pattern (see this article for background) to distinguish between information and non-information resources. So: some URLs under /id get redirected to dynamically created pages under /doc. These dynamic pages are built from a database and contain links to other /doc/ resources, so in general we don't want them to be crawled. Our robots.txt contains:

        Disallow: /doc

    However, we do want the non-redirected pages under /id to get indexed by Google et al:

        Allow: /id

    So the question I have, which I can't find an answer to so far, is: if an allowed /id page 303-redirects to a /doc page, will it still be blocked by robots.txt? If yes, we're OK; otherwise I'm going to disallow all /id resources in the robots file, as having the crawler hammer the DB would be worse than losing search indexing for the /id pages.
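
    It is worth confirming what a crawler actually sees at each hop; a minimal sketch with curl, where example.org and the path stand in for the real host and resource:

        # First hop: the /id URI should answer 303 with a Location pointing into /doc
        curl -sI http://example.org/id/some-thing | grep -iE '^(HTTP|Location)'

        # Follow the redirect and report the final status code and effective URL
        curl -sIL -o /dev/null -w '%{http_code} %{url_effective}\n' http://example.org/id/some-thing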

  • WWNs, WWPNs and Fibre Channel addresses

    - by user238230
    There is a lot of contradictory information on these subjects and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this; it says: "A WWPN (world wide port name) is the unique identifier for a Fibre Channel port, where a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual-port HBA: there will be two WWPNs (one for each port) and only a single WWN for the card itself."

    Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch, and it uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?
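
    A concrete way to see the node-vs-port distinction is to look at what an HBA reports to the OS; a minimal sketch on a Linux host with a Fibre Channel HBA, assuming the standard fc_host sysfs interface:

        # Each FC port appears as its own host entry;
        # node_name is the WWNN, port_name is the WWPN, port_id is the 24-bit fabric address
        for h in /sys/class/fc_host/host*; do
            echo "$h:"
            echo "  WWNN:  $(cat $h/node_name)"
            echo "  WWPN:  $(cat $h/port_name)"
            echo "  FC ID: $(cat $h/port_id)"   # top byte of this address is the switch's Domain ID
        done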

  • Website with sections in Drupal?

    - by Matt Hampel
    What is the best way to create a website with sections in Drupal? Users need to be able to add, remove, and nest pages fairly easily. Pages added to a section should have an appropriate URL, like "/[section name]/[page title]". This seems like a straightforward task, but I can't find the right combination of tools to do it. Subsite comes close, but for some odd reason, doesn't set up the correct content paths. The closest I got was creating a book for each subsection, but that feels like I'm using the wrong tool for the job. Edited with my solution: I used organic groups with pathauto. I set pathauto so that pages in groups had URLs that were of the form [group path]/[page title].

  • 302 Redirect causes garbage at end of Wordpress link in Facebook

    - by Joao
    When I try to link my WordPress blog on Facebook, the URL doesn't resolve properly: there's garbage appended at the end, and Facebook is not able to retrieve information from the site. It happens with every page, post, or the main entry. Here's what happens: http://clarissarezende.com.br/ shows up in Facebook as http://clarissarezende.com.br/UPLcS/ (when I copy/paste the link), and no information about the site shows up in FB. I'm using WordPress 3.3.1 with ProPhoto 4. Recently I moved the DNS entry on my ISP. The blog is hosted at clarissarezende.com.br/public_html/blog2; before, the DNS would point to public_html, and then I changed it to public_html/blog2. Note that I did not move any WordPress files. I made what I think are the necessary changes all over Facebook, but still no dice... Any ideas on what can be happening?
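
    To see what Facebook's scraper is actually served (and whether anything in the page markup points it at the extra path segment), fetching the front page with the scraper's user agent can help; a minimal sketch with curl:

        # Fetch the page the way Facebook's crawler does and look for og:url / canonical tags, if any
        curl -s -A "facebookexternalhit/1.1" http://clarissarezende.com.br/ | grep -iE 'og:url|rel="canonical"'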

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 with some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.

    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.

    I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above iptables rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does someone have an idea for a solution?

    Some config info:

        iptables -nL -v --line-numbers -t mangle

        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in  out source    destination
        1     591K   50M MARK     all  --  *   *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *   *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat

        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target   prot opt in  out  source    destination
        1       15  1052 SNAT     all  --  *   tun0 0.0.0.0/0 0.0.0.0/0   mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42

        ip route show table 42
        default via VPN_IP dev tun0
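
    One possible way to get the failsafe, building on the mark that is already being set, is to match on the mark itself in the filter table instead of on the owner. The assumption is that the encapsulated packets OpenVPN sends out of eth0 originate from the openvpn process (not from groupname), so they never carry the mark; this is a sketch under that assumption, not a tested drop-in:

        # Refuse marked (groupname) packets that are about to leave on anything other than tun0.
        # While the VPN is up, policy routing steers them to tun0 and this rule never matches;
        # if tun0 disappears, routing would fall back to eth0 and the packets get rejected instead of leaking.
        iptables -A OUTPUT ! -o tun0 -m mark --mark 42 -j REJECT

        # Belt and braces: keep marked traffic from ever using the main routing table's default route.
        # An unreachable default with a worse metric only takes effect when the tun0 default is gone.
        ip route add unreachable default metric 1000 table 42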

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include and include_once do not function; or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my nginx server block:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy             (file doesn't exist)  -->  index.php?name=$1
        domain.com/admin/anyfile.php (files DO exist)      -->  admin/anyfile.php?$args
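
    To take nginx out of the picture and exercise the PHP-FPM pool directly, the FastCGI socket can be driven by hand; a minimal sketch, assuming the cgi-fcgi tool (shipped with the libfcgi utilities) and the pool listening on 127.0.0.1:9002 as in the block above:

        # Ask the pool to execute a known-good file that uses require/include and inspect the output
        SCRIPT_NAME=/admin/anyfile.php \
        SCRIPT_FILENAME=/home/username/public_html/admin/anyfile.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9002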

  • Java update alert: issue with EAS 11.1.2.3

    - by inowodwo
    (in via Nancy) Customers using EPM 11.1.2.3 and a web browser to launch the Essbase Administration Services Console will lose the ability to launch EAS Console via the Web URL if they apply Java 1.7 build 45. Development is currently investigating this issue. Workaround: If Java 1.7 Update 45 has been installed, it will need to be uninstalled and a previous version will need to be installed. Older versions of Java are available in the Java Archive Note: Though it may work, Java 1.7 is not supported in previous versions of EAS. Customers running a version of EAS Console prior to 11.1.2.3 need to install the supported version of JRE. Follow this in the Community

  • Preventing DDOS/SYN attacks (as far as possible)

    - by Godius
    Recently my CentOS machine has been under many attacks. I run MRTG, and the TCP connections graph shoots up like crazy when an attack is going on; it results in the machine becoming inaccessible.

    This is my current /etc/sysctl.conf config:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 1

        # Controls whether core dumps will append the PID to the core filename
        # Useful for debugging multi-threaded applications
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536

        # Controls the default maxmimum size of a mesage queue
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_syncookies = 1
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_max_syn_backlog = 1280

    Furthermore, my iptables file (/etc/sysconfig/iptables) only has this setup:

        # Generated by iptables-save v1.3.5 on Mon Feb 14 07:07:31 2011
        *filter
        :INPUT ACCEPT [1139630:287215872]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1222418:555508541]

    Together with the settings above, there are about 800 IPs blocked via the iptables file by lines like:

        -A INPUT -s 82.77.119.47 -j DROP

    These have all been added by my hoster when I've emailed them in the past about attacks. I'm no expert, but I'm not sure this is ideal. My question is: what are some good things to add to the iptables file, and possibly other files, that would make it harder for attackers to take down my machine without locking out any non-attacking users? Thanks in advance!
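
    As a starting point, per-source connection caps and SYN rate limiting are common first-line measures on top of syncookies; a minimal sketch, assuming the connlimit and hashlimit iptables modules are available for this kernel/iptables build (thresholds are illustrative and need tuning):

        # Cap concurrent connections per source IP on the web port
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 30 -j DROP

        # Accept new SYNs on the web port up to a per-source rate, then drop the excess
        iptables -A INPUT -p tcp --syn --dport 80 -m hashlimit \
                 --hashlimit 20/second --hashlimit-burst 40 \
                 --hashlimit-mode srcip --hashlimit-name http-syn -j ACCEPT
        iptables -A INPUT -p tcp --syn --dport 80 -j DROP

        # Drop obviously bogus TCP flag combinations
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP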

  • Why does my DD-WRT not accept SSH connections from my laptop?

    - by Vlad Seghete
    So, here is my system: I have a 2Wire AT&T modem/router which I use for wireless and a Buffalo router flashed with DD-WRT which is physically attached to the 2Wire and set in the DMZ. I set everything up on the DD-WRT to be able to connect to it using ssh and also so that it forwards ssh requests on a different port to one of the servers behind it. Now, when I am physically connected to the DD-WRT all this works great and as I would want it to. I ssh into the two different ports using the WAN IP of my network, and I get where I expect to land. If, however, I am connected using wi-fi to the 2Wire, the same commands do not work. I do not get an error, simply a timeout. I have trouble understanding this, since the DD-WRT is set in the DMZ and everything should pass to it. To further complicate the problem, I tried connecting to the same IP using my phone (wireless disabled, so really from the WAN) and surprise, it works! If I go back on the local network by enabling the wifi, the ssh connection times out. To make this even stranger, my WAN IP address always responds to pings (meaning in all the above situations). What could be going on here? I know what I should do, completely disable the 2wire as a router and use it strictly as a modem and them use all the routing capabilities of the dd-wrt. It's what I will probably end up doing anyway, but my question remains, because I really want to know what is happening here.

  • Broken links in content reports when tracking subdomains with Google Analytics

    - by Rob Sobers
    I have a tracking code that I use on my main site and my blog, which is on a subdomain:

        www.example.com
        blog.example.com

    I have a single profile in Google Analytics and use advanced segments to look at traffic to the main site vs. traffic to the blog.

    Problem 1: When I'm browsing my content reports under Standard Reporting, the "Page" column doesn't show the top-level or sub-domain, so I can't easily differentiate www.example.com/index.html from blog.example.com/index.html. According to the docs, this filter is supposed to make GA prepend the hostname to the page URL in your content reports, but it doesn't seem to work.

    Problem 2: When I click on the little "Open in new window" icon next to a given page in a content report line, it always assumes the page lives on www.example.com, so I get 404s when the page is actually on blog.example.com.

    Is there a good solution for these subdomain tracking problems?

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only three days short of being six months old. The website is unique, in that there is no competitor to this type of site in India: it provides a comparison of payment gateways in India, besides the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, the latest change being 3 months back, in which case Google Webmaster reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to be ranking better than this. Is it due to:

    1) The large number of outbound links to in-context companies and information
    2) The 404s reported in Google Webmaster

    My other site successfully gets 1500 unique visitors daily and is up in the Google rankings. I don't know why this one is not!

  • Removing AppPrincipals from Office365

    - by Sahil Malik
    So here is an annoying issue. If I have your AppPrincipal and secret, I can party as you! But as we go through our usual dev cycles, we create these application IDs. Hell, Visual Studio will create them for us, to make things easy! The problem is, many a developer, and some an IT ogre, may leave these AppPrincipal IDs sitting there and not clean them up when they are done playing. You can look for currently registered app principals at https://yourtenant/_layouts/15/appprincipals.aspx. The problem is, that URL shows you app principals that are registered AND currently in use; app principals that are currently NOT in use are NOT shown on that page. The same issue applies on premises as well, even though here I am talking specifically about Office 365. Getting rid of these on-prem is easy: just use the (server-side) object model.
