Search Results

Search found 50945 results on 2038 pages for 'web testing'.


  • VPC SSH port forward into private subnet

    - by CP510
    OK, so I've been racking my brain for days on this dilemma. I have a VPC set up with a public subnet and a private subnet, and the NAT is in place, of course. I can SSH into an instance in the public subnet, as well as into the NAT, and I can even SSH to the private instance from the public instance. I changed the sshd configuration on the private instance to accept both port 22 and an arbitrary port, 1300, and that works fine. But I need to set it up so that I can connect to the private instance directly using port 1300, i.e.

        ssh -i keyfile.pem [email protected] -p 1300

    where 1.2.3.4 should route the connection to the internal server 10.10.10.10. Now, I heard iptables is the tool for this job, so I went ahead and researched and played around with some routing rules. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    When I try this from home, it just times out: no connection refused messages or anything, and I can't find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions, in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
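    For reference, a minimal sketch of the pieces a DNAT-style port forward on a Linux instance usually needs beyond the rules quoted above: the forwarding sysctl, return-traffic rules in the FORWARD chain, and source NAT restricted to the forwarded destination. The interface and addresses are taken from the question; whether EC2 source/destination checking also has to be disabled on the forwarding instance is an assumption to verify.

        # Sketch only, not a verified fix for the setup above
        sysctl -w net.ipv4.ip_forward=1        # without this the kernel never forwards at all

        # Forward inbound 1300 to the private instance
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        # Allow the forwarded traffic and its replies through the FORWARD chain
        iptables -A FORWARD -p tcp -d 10.10.10.10 --dport 1300 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -p tcp -s 10.10.10.10 --sport 1300 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Rewrite the source so 10.10.10.10 replies via this instance rather than via the NAT
        iptables -t nat -A POSTROUTING -d 10.10.10.10 -p tcp --dport 1300 -j MASQUERADE

        # Assumption: on EC2 the forwarding instance also needs source/dest checking disabled, e.g.
        # ec2-modify-instance-attribute <instance-id> --source-dest-check false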


  • How Can I Map IIS on My PC with Static IP to my Domain Name?

    - by Subhen
    Hi, I now have an internet connection with a static IP. I want to know how I can map my website to the static IP I received from my ISP. I know this is not a good idea for security and performance reasons, but I just want to set up a test project. Also, can't I just buy a domain name and map it to my static IP, instead of buying hosting from a web hosting provider? Right now I have bought space from bizzhost and hosted my site by setting the name servers. Thanks, Subhen
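    For illustration, a rough sketch of the DNS records this kind of setup usually comes down to; the domain and address below are placeholders, and it assumes the registrar's panel lets you edit records directly rather than only delegate name servers.

        ; hypothetical zone entries at the registrar
        example.com.       IN  A      203.0.113.10    ; root domain -> the static IP from the ISP
        www.example.com.   IN  CNAME  example.com.    ; or a second A record pointing to the same IP

    The IIS site then just needs a binding on port 80 at that address, and the ISP must not be blocking inbound port 80.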


  • A few basic questions on web hosting (nameservers & DNS records)

    - by claws
    I bought a domain name on name.com and I want to use the free web hosting at 110mb.com. By default name.com integrates Google Apps services, and the name server entries are ns1.name.com, ns2.name.com, ns3.name.com and ns4.name.com. When I registered on 110mb.com it gave me two addresses: ns1.110mb.com and ns2.110mb.com. This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these four entries by default, and how exactly does that work? Should I remove those four and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to keep using Google Apps. If I change the name server addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX and the other record types are. I want to learn how this stuff works, but when I search for web hosting tutorials I can't find any fruitful results.
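    As a sketch of one common arrangement (keeping name.com's name servers and Google Apps, and only pointing the website itself at the new host), the records might look roughly like this; the 110mb IP address is a placeholder you would get from their control panel, and the MX hosts shown are the ones Google published for Google Apps at the time.

        ; keep ns1-ns4.name.com as the name servers, then add records along these lines
        yourdomain.com.      IN  A      203.0.113.50              ; placeholder IP of the 110mb server
        www.yourdomain.com.  IN  CNAME  yourdomain.com.
        yourdomain.com.      IN  MX 10  ASPMX.L.GOOGLE.COM.       ; mail keeps flowing to Google Apps
        yourdomain.com.      IN  MX 20  ALT1.ASPMX.L.GOOGLE.COM.

    Switching the name servers to ns1/ns2.110mb.com instead would hand all of these records over to 110mb, so the Google Apps entries would have to be recreated there.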


  • iptables: Allow only HTTP access for web browsing

    - by user1448260
    I have a Linux box that I want locked down, but I still want to be able to surf the internet from it. Why is this script blocking HTTP too?

        #!/bin/sh
        #
        iptables -F
        #
        # Set default policies for INPUT, FORWARD and OUTPUT chains
        #
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP
        #
        # Allow TCP connections on tcp port 80
        #
        iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        #
        # Set access for localhost
        #
        iptables -A INPUT -i lo -j ACCEPT
        #
        # List rules
        #
        iptables -L -v
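    For comparison, a sketch of rules aimed at outbound browsing rather than an inbound web server (which is what the port-80 rules above are shaped for); eth0 is taken from the question, and everything else, including allowing DNS and HTTPS, is an assumption.

        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        # DNS lookups, without which no site name will resolve
        iptables -A OUTPUT -o eth0 -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p udp --sport 53 -m state --state ESTABLISHED -j ACCEPT
        # HTTP and HTTPS out, replies back in
        iptables -A OUTPUT -o eth0 -p tcp -m multiport --dports 80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp -m multiport --sports 80,443 -m state --state ESTABLISHED -j ACCEPT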


  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to, and this is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with source 10.1.3.12 and destination 10.1.1.190, but I didn't see a single ACK. As I understand it, any ACKs sent from the ENI would have destination 10.1.3.12, i.e. the same as the local machine's IP, so such packets are routed as local packets, matching this local routing entry:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I could either lower the priority of the local table (if that is even possible) so the packets match table 10003, or use iptables to mark these packets and update the routing to take the mark into account, but I'm not sure either of these is the right approach.
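    For what it's worth, a sketch of both ideas using the addresses and table number from the question; treat this as an assumption to test carefully (re-ordering the local table can cut a machine off), not a verified fix.

        # (1) Put a source rule ahead of the local table so traffic from the ENI address
        #     consults table 10003 first:
        ip rule add from 10.1.1.190 lookup 10003 pref 50
        ip rule add from all lookup local pref 100   # re-add the local table at lower priority...
        ip rule del pref 0                           # ...before deleting the original rule

        # (2) Mark-based routing instead: mark connections arriving on eth3 and route replies by mark
        iptables -t mangle -A PREROUTING -i eth3 -j CONNMARK --set-mark 0x3
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
        ip rule add fwmark 0x3 lookup 10003 pref 60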


  • Task manager for Internet usage: I need to block software from accessing a website/web server

    - by Pennf0lio
    I have a piece of software that accesses a website; I want to monitor which website it is accessing and then block that site. Is there a tool similar to Windows Task Manager that lets you see which websites or servers a program is connecting to? Once I know the website/server, I would like to block it. Also, is there an alternative way to block it aside from the "hosts" file? Thanks! FYI: running on Windows 7.
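    A rough sketch of doing both from the command line with tools built into Windows 7; the executable path and remote address below are placeholders.

        :: See which remote addresses each process is talking to (-b needs an elevated prompt)
        netstat -b -n

        :: Block the program itself from making outbound connections
        netsh advfirewall firewall add rule name="Block suspect app" dir=out action=block program="C:\Path\To\app.exe"

        :: Or block a specific remote address for all programs
        netsh advfirewall firewall add rule name="Block suspect host" dir=out action=block remoteip=203.0.113.10

    Resource Monitor (resmon.exe, Network tab) gives roughly the same per-process connection view with a GUI.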


  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
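    As one more option, a sketch using CloudWatch's daily S3 storage metrics via the AWS CLI; this assumes the aws CLI is installed and credentialed, and the bucket name and dates are placeholders.

        aws cloudwatch get-metric-statistics \
            --namespace AWS/S3 --metric-name BucketSizeBytes \
            --dimensions Name=BucketName,Value=bucket_name Name=StorageType,Value=StandardStorage \
            --start-time 2013-04-10T00:00:00Z --end-time 2013-04-17T00:00:00Z \
            --period 86400 --statistics Average

        # Object count from the same source:
        aws cloudwatch get-metric-statistics \
            --namespace AWS/S3 --metric-name NumberOfObjects \
            --dimensions Name=BucketName,Value=bucket_name Name=StorageType,Value=AllStorageTypes \
            --start-time 2013-04-10T00:00:00Z --end-time 2013-04-17T00:00:00Z \
            --period 86400 --statistics Average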


  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows Server 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be served. The problem I have is that once the redirect is set, it also affects /specialdir: even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting; IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?
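    If the MMC changes refuse to stick, a sketch of setting the metabase directly with adsutil.vbs; the site ID (1) is a placeholder for the site's actual metabase ID, and the exact property values are an assumption to double-check against the IIS 6 metabase documentation.

        REM Set the redirect at the site root (EXACT_DESTINATION sends every request to that one URL)
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/HttpRedirect "http://www.bar.org/AwesomePage.html, EXACT_DESTINATION"

        REM Remove the inherited redirect on the subdirectory so it serves local content again
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs DELETE W3SVC/1/ROOT/specialdir/HttpRedirect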


  • SSH does not allow the use of a key with group readable permissions

    - by scjr
    I have a development git server that deploys to a live server when the live branch is pushed to. Every user has their own login, so the post-receive hook that does the live deployment runs under their own user. Because I don't want to maintain each user's public key as an authorized key on the remote live server, I have made a set of keys that "belong" to the git system to add to remote live servers (in the post-receive hook I use $GIT_SSH to set the private key with the -i option). My problem is that, because any of the users might want to deploy to live, the git system's private key has to be at least group readable, and SSH really doesn't like this. Here's a sample of the error:

        XXXX@XXXX /srv/git/identity % ssh -i id_rsa XXXXX@XXXXX
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0640 for 'id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: id_rsa

    I've looked around expecting to find something in the way of forcing SSH to just go through with the connection, but I've found nothing except people blindly saying that you shouldn't allow access to anything but a single user.
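    One commonly used workaround, sketched below, is to keep the group-readable key as the master copy and have the $GIT_SSH wrapper hand ssh a private, per-invocation copy; the wrapper path and key path are assumptions standing in for your layout.

        #!/bin/sh
        # hypothetical wrapper, e.g. /srv/git/bin/deploy-ssh, pointed to by $GIT_SSH
        KEY_SRC=/srv/git/identity/id_rsa
        KEY_TMP=$(mktemp)                  # mktemp creates the file with mode 0600
        trap 'rm -f "$KEY_TMP"' EXIT
        cat "$KEY_SRC" > "$KEY_TMP"
        ssh -i "$KEY_TMP" "$@"             # git invokes this as: $GIT_SSH <host> <command>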


  • Can I associate my spare Elastic IP addresses to an Amazon EC2 instance started in an Autoscale group and Monitoring?

    - by undefined
    I want to know if I can reserve a number of Amazon Elastic IP addresses and assign them to instances started by Auto Scaling. Basically, when a new instance is started because a trigger has fired, can I also have the API look for a spare IP address and allocate it to the instance? I need to do this because the started instance will need to communicate with a server outside the cloud and get through a firewall which only allows remote access from a predefined set of IP addresses. So I think I need to reserve some IPs, add them to my firewall settings, and then allocate them (automatically) when a new instance is started. Any ideas?
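    As far as I know, Auto Scaling doesn't attach Elastic IPs on its own, but a boot-time script (for example in the launch configuration's user data) can claim one. A sketch, assuming the aws CLI is installed on the AMI and the instance has credentials to call EC2; the address pool is a placeholder, and note that VPC addresses are associated by --allocation-id rather than --public-ip.

        #!/bin/sh
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        # Walk the reserved pool and grab the first address not already attached to an instance
        for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
            OWNER=$(aws ec2 describe-addresses --public-ips "$ip" --query 'Addresses[0].InstanceId' --output text)
            if [ "$OWNER" = "None" ]; then
                aws ec2 associate-address --instance-id "$INSTANCE_ID" --public-ip "$ip"
                break
            fi
        done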


  • Fix Fatal Error Condition showing system path

    - by JMC
    I've noticed there are a large number of servers running Magento Commerce that will return a fatal error exposing the system path:

        Fatal error: Uncaught exception 'Exception' with message 'File '/usr/local/www/magento/data1702/media/css' does not exists.' in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php:96
        Stack trace:
        #0 /usr/local/www/magento/data1702/get.php(205): Varien_File_Transfer_Adapter_Http->send('/usr/local/www/...')
        #1 /usr/local/www/magento/data1702/get.php(165): sendFile('/usr/local/www/...')
        #2 {main}
          thrown in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php on line 96

    Magento as an application is generally good about suppressing error messages. How can a Linux server running Apache be configured to avoid returning this error message, since the app has problems suppressing it?
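    The usual server-side switch is to stop PHP from printing errors to the browser at all and log them instead. A sketch, assuming mod_php; the log path is a placeholder.

        ; php.ini
        display_errors = Off
        log_errors = On
        error_log = /var/log/php_errors.log

        # or per-vhost / .htaccess when running mod_php:
        # php_flag display_errors off
        # php_flag log_errors on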


  • Free, web based alternative to Visio?

    - by Lars
    I have used Visio to map out my network structure, and have used the export function to create an HTML page that is searchable by IP, hostname etc. This is a really nice tool and I use it often. However, I would like for users who do not use Internet Explorer to be able to use the search features. What are some alternatives to Visio here? I want to draw a network diagram where objects are searchable. Thanks!


  • SSRS report on SharePoint Web Part

    - by MicroSumol
    I have this configuration: DBK runs SQL Server/SSRS/SSAS (including the SharePoint databases); SPK runs SharePoint. I created a SharePoint site with an SSL certificate, then on DBK I set up SSRS with SSL. Finally I went back to SharePoint and set up a web part on a subsite to connect to the SSRS report. The problem is that the user is asked to authenticate twice: once when logging into SharePoint, and again when viewing the SSRS report. Since I am not an expert on SSRS, I'm asking: is there an easy way to pass the SharePoint credentials through to the SSRS report? Would it be easier to install SSRS on SPK? Would that even work or solve my problem?


  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some users, it doesn't return the page, only a 200 OK with an empty body. Here is the flow (from Wireshark):

        GET /issues/142 HTTP/1.1
        Host: debian:3000
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
        Cache-Control: max-age=0

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Date: Wed, 12 Dec 2012 16:30:16 GMT
        Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
        Content-Length: 0

    As you can see, I get nothing from the server. This is mostly random: the blank page happens sometimes for some users, and for other users it almost never happens... I'm absolutely lost here. Any idea what could be the cause? Thanks in advance!


  • Does setting an A record for a root domain set it (automatically) for subdomains?

    - by Edan Maor
    I bought a domain from Dreamhost, but my servers are actually running on Amazon's AWS. I have an Elastic IP, say 1.1.1.1. In the Dreamhost panel, I've added an A record for my domain name, pointing it to 1.1.1.1. My question is, are all subdomains (e.g. www.mydomain.com, a.mydomain.com, etc.) automatically mapped to 1.1.1.1 as well, because the root is? Or do I have to add separate rules for each subdomain?
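    Each host name normally needs its own record; an A record on the bare domain doesn't cascade down to subdomains by itself. A sketch of what the zone might contain, reusing the placeholder address from the question; whether Dreamhost's panel accepts a wildcard entry is an assumption to check.

        mydomain.com.        IN  A  1.1.1.1
        www.mydomain.com.    IN  A  1.1.1.1      ; not implied by the root record on its own
        a.mydomain.com.      IN  A  1.1.1.1
        *.mydomain.com.      IN  A  1.1.1.1      ; optional wildcard for anything not listed explicitly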


  • Mod_rewrite issue with godaddy web hosting

    - by MrFoh
    I'm trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error log shows this:

        AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    And the .htaccess file is this:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The web root has multiple sub-folders which are document roots for different domains, and I'm working in one of these sub-folders. What is causing this error, and how can it be fixed?
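    A sketch of the shape the .htaccess often takes when the site lives in a sub-folder of a shared web root; the folder name /mysite/ is a placeholder, and the extra condition is one common way of breaking the rewrite loop rather than a guaranteed fix.

        <IfModule mod_rewrite.c>
            Options -MultiViews +FollowSymLinks
            RewriteEngine On
            RewriteBase /mysite/
            # stop once the request has already been rewritten to the front controller
            RewriteCond %{REQUEST_URI} !index\.php
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>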


  • Google Chrome no longer treats "Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, on both OS X and Ubuntu. I have four or five web apps installed, and they appear on the "New Tab" page just fine. The problem is that before, when the feature first became available in the Dev Channel, the actual tabs hosting the web apps received special treatment: they had a 3D Dock-like look, and (more importantly) the tab bar was hidden while using that tab. Sometime in the last few weeks, however, that special treatment just disappeared with one of the daily updates. The web apps still show up on the New Tab page, they still work in the sense that they capture all URLs going to that web app, and they use the right icons; but they've basically become indistinguishable from a regular pinned tab. The two special features mentioned above have disappeared on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?


  • Web server replica not working on another server

    - by user761076
    I have a Drupal installation (PHP + MySQL) on one server, and I'm trying to copy this installation to another server with the same configuration: same physical and virtual path, same DB configuration, etc. The thing is, on my new server I get the homepage to work, but not the inner pages, so I guess it has something to do with rewriting (mod_rewrite is installed, and both .htaccess files are the same). When I access http://localhost/myweb/content/mypage I get a 404, or a "Forbidden" if I uncomment this in httpd.conf (the original httpd.conf does not have this entry):

        <Directory "path/to/docs">
            DirectoryIndex index.php index.html
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any clue? Thank you
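    A sketch of the usual suspects when only the Drupal front page survives a move: the directory block needs AllowOverride All so Drupal's own .htaccess rewrite rules are read, and mod_rewrite has to be enabled. The path is a placeholder for the real document root.

        <Directory "/path/to/docs">
            DirectoryIndex index.php index.html
            Options FollowSymLinks
            AllowOverride All          # AllowOverride None makes Apache ignore Drupal's .htaccess
            Order allow,deny
            Allow from all
        </Directory>

        # Debian/Ubuntu style, if the module isn't already on:
        # a2enmod rewrite && apache2ctl restart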


  • Which server requirements for Redmine, Git and website hosting?

    - by Ephismen
    Nine other students and I are going to start a project that will last at least two years, and we are looking to host all our work on a server. We would like to run Redmine, Git, a public website/blog to show our work, and an internal, private development website/blog. We haven't decided yet which OS we will install, but we were leaning toward Ubuntu or Fedora. With a limited budget of $300/year, we would like some advice on the following dedicated server specifications:

        Kimsufi 2G: Intel Celeron/Atom, 1.20 GHz, 64-bit, 2 GB DDR2, 1 TB HDD, 100 GB FTP backup; 100 Mbps connection, unlimited traffic
        Dedibox SC: Dell Nano U2250, 1x 1.6 GHz, 64-bit, 2 GB DDR2, 160 GB HDD; 1 Gbit/s connection, unlimited traffic

    Will these servers be sufficient? Should we host the websites on another platform? Would a virtualized server be more appropriate? Thank you for your answers, Ephismen.


  • AWS RDS Mysql with benstalk Hibernate app: Character encoding issue

    - by TeraTon
    I'm running a webapp on Amazon RDS with Tomcat 7 and Spring, which uses Hibernate as the persistence layer. The application and UTF-8 encoding work properly on localhost, but for some reason, when I deploy to Amazon, the UTF-8 encoding breaks. I use MySQL 5.5.27 on Amazon RDS, and the table we wish to update has its collation set to utf8 - utf8_unicode_ci. In Hibernate I have set:

        <prop key="hibernate.connection.charSet">UTF-8</prop>

    UTF-8 characters get replaced by ???, and this is of course especially bad for passwords, usernames and email addresses, as it basically destroys them. Has anyone else encountered character encoding breaking when deploying to Amazon?
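    For comparison, a sketch of the settings that usually carry the encoding end to end when the driver and server defaults differ; the RDS endpoint and database name are placeholders, and the parameter-group values are an assumption to verify in the RDS console.

        <!-- hypothetical datasource / Hibernate settings -->
        <property name="url"
                  value="jdbc:mysql://mydb.xxxxxx.us-east-1.rds.amazonaws.com:3306/mydb?useUnicode=true&amp;characterEncoding=UTF-8"/>
        <prop key="hibernate.connection.useUnicode">true</prop>
        <prop key="hibernate.connection.characterEncoding">UTF-8</prop>

        <!-- RDS side, set in a custom DB parameter group rather than XML:
             character_set_server = utf8, character_set_client = utf8, character_set_results = utf8 -->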


  • Help me host an Apache server that LAN-connected computers can access?

    - by akhilesh
    I have built a JSP project, but I really want other computers connected to my computer to be able to access the website. I have never done this before, please help me. Right now my server can be accessed at http://localhost:8080 on my local machine. What configuration do I need to do? Please post a link or step-by-step instructions.
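    A sketch of the usual steps, assuming Tomcat on the default port 8080; the IP address shown is a placeholder for whatever your machine actually has on the LAN.

        # 1. Find this machine's LAN address (Windows: ipconfig, Linux: ip addr / ifconfig)
        ipconfig            # e.g. IPv4 Address . . . : 192.168.1.42

        # 2. Make sure the connector in conf/server.xml is not bound to 127.0.0.1 only, e.g.
        #    <Connector port="8080" protocol="HTTP/1.1" ... />   (no address="127.0.0.1")

        # 3. Allow inbound 8080 through the local firewall (Windows example):
        netsh advfirewall firewall add rule name="Tomcat 8080" dir=in action=allow protocol=TCP localport=8080

        # 4. Other machines on the LAN then browse to http://192.168.1.42:8080/yourapp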


  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much about all of this, but I'm not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks to me like three, but it's all very confusing: BEAST, CRIME and BREACH. It seems the solution may be to simply not use compression with HTTPS traffic (or is that just for one of them?). I use gzip compression. Is that okay, or is that part of the problem? I also use Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? After reading all the theory I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so that I am not likely to suffer from these attacks? Proposed solution: use gzip compression on HTTP traffic, but don't use any compression on HTTPS traffic.
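    For what it's worth, a sketch of the TLS-level knobs on Ubuntu 12.04, offered as an illustration rather than a complete mitigation: disabling TLS compression is what addresses CRIME, while BREACH concerns HTTP-level gzip of responses that echo secrets, which these directives alone do not cover.

        # Apache (2.4.3+, or a distribution-patched 2.2): turn off TLS-level compression
        SSLCompression off

        # nginx equivalent, if you also want to drop HTTP gzip on the HTTPS server block:
        # server { listen 443 ssl; gzip off; ... }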


  • Need advice on how to set up live video streaming to web/mobile devices

    - by jasondewitt
    I have a bunch of live UDP video streams that are currently viewed by set-top boxes on my network. I would like to pick this video up (I can do this with VLC now) and stream it out to other, non-STB endpoints (a web page, or a phone/tablet of some sort). Right now I am able to pick up the UDP stream with VLC and convert it to an HTTP stream on port 8080 of my VLC box, and then use the VLC client to watch that stream. This is where I'm not sure where to go with it. I really doubt I would want everyone who is watching the video to connect back to the VLC server that is doing the encoding, so how do I distribute this live video to the people who want to see it?
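    A sketch of the first hop with VLC, restreaming the UDP source as an HTTP transport stream that a separate relay or CDN node (rather than every viewer) would then pull from; the multicast address, port and codec settings are placeholders that depend on the source.

        cvlc udp://@239.1.1.1:1234 \
            --sout '#transcode{vcodec=h264,vb=1500,acodec=mp4a,ab=128}:std{access=http,mux=ts,dst=:8080/stream}'

        # a relay box (or the viewers, for small audiences) then opens http://your-server:8080/stream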

