Search Results

Search found 32252 results on 1291 pages for 'software services'.

  • PHP cannot connect to AWS RDS server

    - by Eugene Lim
    I have an EC2 instance with Apache, PHP and phpMyAdmin. I have connected phpMyAdmin to manage the AWS RDS server; they are in the same security group. But when I try to use a PHP script to connect to the AWS RDS server, it gives me SQLSTATE[HY000] [2003] Can't connect to MySQL server on (xxx.xxx.xxx.xxx). I did some research, and most results say to use setsebool -P httpd_can_network_connect=1 (reference: http://www.filonov.com/2009/08/07/sqlstatehy000-2003-cant-connect-to-mysql-server-on-xxx-xxx-xxx-xxx-13/). But I have no idea which server to configure, or how.
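
    If the setsebool fix applies, the place to run it is the EC2 instance hosting Apache and PHP; RDS gives you no shell to configure. A minimal sketch for checking connectivity from that box (dbuser is a hypothetical placeholder, and xxx.xxx.xxx.xxx stands in for the RDS endpoint):

        # On the EC2 web server: let httpd processes open outbound network connections
        sudo setsebool -P httpd_can_network_connect=1

        # Confirm the RDS endpoint answers from the same box the PHP script runs on
        mysql -h xxx.xxx.xxx.xxx -u dbuser -p -e 'SELECT 1;'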

  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/cURL script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5 second timeout):

        #  http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1  0          0.000096         0.0342        0.0000            0.0000              0.0342
        2  200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure whether this shows that the connection successfully passed the firewall, or whether the firewall could still cause an issue. Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google, and I'm not sure where they would fit between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
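
    For reproducing the measurements, a shell equivalent of that timing loop, as a sketch (the URL is a placeholder for the real webservice endpoint):

        # 50 calls with a 5 second timeout, printing curl's timing breakdown per attempt
        for i in $(seq 1 50); do
          curl -o /dev/null -s -m 5 \
            -w '%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n' \
            http://example.com/webservice
        done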

  • Uninstaller that can work with failed installs

    - by Nifle
    I'd like an application that can uninstall programs that do not show up in the normal Add/Remove Programs list. I have an application that failed to install properly: it installs halfway and then crashes, leaving shortcuts, folders with exe and config files in them, and some stuff in the registry. Ideally I would like to be able to point the uninstaller at the install dir and tell it to remove everything that references files in that dir.

  • Passenger connection reset by peer issue

    - by user887372
    I am new to Ruby on Rails. I am using Passenger 3.0.17 to deploy my Rails 3.2.6 project. My project is working fine, but I get a 500 internal error when I try to upload files to the server. I checked my Passenger log and found:

        [ pid=20654 thr=140394143790848 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:57.82 ]: Uncaught exception in PassengerServer client thread:
           exception: write() failed: Connection reset by peer (104)
           backtrace:
             in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
             in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
             in 'void Client::threadMain()' (HelperAgent.cpp:952)
        2012/11/01 09:29:27 [crit] 20691#0: *431 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/2" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/jquery.js?body=1 HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
        2012/11/01 09:29:33 [crit] 20691#0: *435 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/3" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/background.png HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
        [ pid=20654 thr=140394115462912 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:33.543 ]: Uncaught exception in PassengerServer client thread:
           exception: write() failed: Connection reset by peer (104)
           backtrace: (same as above)

    Please guide me regarding the issue. I am unable to find the reason for this peer reset and the failed mkdir(). Thanks in advance.
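
    The failed mkdir() calls suggest Passenger Standalone's temp directory under /tmp was deleted while the server was running; a periodic /tmp cleaner such as tmpwatch is a common culprit, though that is only an assumption here. A diagnostic sketch using the paths from the log:

        # Does the proxy_temp tree still exist for the running instance?
        ls -ld /tmp/passenger-standalone.20640/proxy_temp

        # Is a /tmp cleaner installed that might have removed it?
        rpm -q tmpwatch 2>/dev/null || dpkg -l tmpreaper 2>/dev/null

        # Recreating the directory (or restarting Passenger) should let uploads proceed
        mkdir -p /tmp/passenger-standalone.20640/proxy_temp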

  • What will happen to my RAID 5 after a motherboard change?

    - by abatishchev
    Currently I have an ASUS P5Q-EM and 3 HDDs in RAID 5 using its on-board RAID controller, an Intel ICH10R. I want to buy a new motherboard, for example a Gigabyte GA-EQ45M-S2, which also has an on-board RAID controller, but an Intel ICH10DO. What will happen to my data on the RAID 5? Will I have to re-create the array from scratch and lose all my data? Is such an array soft RAID or soft-hard? And if my current motherboard breaks, what will happen to my data?

  • Terminal Server 2008: Remote App Issue

    - by JohnyD
    I have a FoxPro 2.6 (16-bit) application that I've installed on a Win2008 (32-bit) Terminal Server, and I created a RemoteApp from it. It works fine. The problem is that this FoxPro application calls out to a .NET application. I have the proper .NET Framework installed on the server (2.0) and I have run the Code Access Security Policy tool (caspol.exe). However, when I launch the .NET app from within the FoxPro application I get the following error:

        Description: Stopped working
        Problem signature:
          Problem Event Name:    CLR20r3
          Problem Signature 01:  vector.exe
          Problem Signature 02:  1.0.0.3
          Problem Signature 03:  48b579f2
          Problem Signature 04:  vector
          Problem Signature 05:  1.0.0.3
          Problem Signature 06:  48b579f2
          Problem Signature 07:  f
          Problem Signature 08:  57
          Problem Signature 09:  System.Security.Security
          OS Version:            6.0.6001.2.1.0.18.10
          Locale ID:             1033

    Vector.exe is our .NET application. In fact, it's an in-between application that checks that you have the latest version; when it's done it calls out to another .NET executable. Does anyone know why this would be a problem? Thanks in advance.
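
    Problem Signature 09 (System.Security.Security) points at an unhandled SecurityException, which fits a Code Access Security failure: under .NET 2.0, an exe launched from a network share does not get full trust by default. A hedged sketch of the caspol grant, assuming vector.exe lives on a share (the UNC path and group name are placeholders):

        REM Run on the terminal server; adds a machine-level full-trust code group for the share
        caspol -machine -addgroup 1. -url "file://\\server\apps\*" FullTrust -name "TS Apps"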

  • Reducing latency for different geographic regions on Amazon Cloud

    - by Shoaibi
    I have got an application which has three components:

        Application code: Amazon EC2 US-EAST-1 instance
        Application images and other static data: Amazon S3 with CloudFront
        Application database: Amazon RDS

    In short, I need something like CloudFront for EC2. At greater length: people using this application from a different region, say the Middle East, will get the static content quickly thanks to CloudFront, but there will be a lot of latency in communicating with the EC2 instance. I want a budget-friendly way of improving this. Launching instances in every region on offer is certainly an option, but it isn't cheap, so I'd like to avoid it unless it's a last resort. Also, if my clients need to communicate with the RDS database directly, is there some kind of solution that gives the same sort of benefit for RDS?

  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically I have a white-label app/site that will allow users to set up their own domain to use for their customer front-end. We have 2 dedicated servers and a load balancer. The problem is SSL: we were thinking about using AWS ELB to handle the SSL load balancing, but can't seem to figure out whether it will handle it properly. It seems to be set up to work with EC2 instances, but we are using externally hosted servers behind a load balancer. A blog post by AWS looks similar to what we need, but it too only seems to work with EC2 instances: http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Has anyone had experience setting ELB SSL load balancers up to work with external servers?

  • Hosted version of Yahoo! Answers [closed]

    - by Neil
    Hi all. Does anyone know of a hosted version of Yahoo! Answers (or Stack Overflow/Super User) that I could integrate with my site? I know that there are some open-source implementations (see http://meta.stackoverflow.com/questions/2267/stack-overflow-clones?page=1&tab=votes#tab-top) but I'd rather have something hosted if possible. I know there is http://stackexchange.com/ as well, but I really want tight integration with the rest of my site. Failing that, has anyone got any experience with the open-source versions? Some of them look a little, erm, unfinished... Thanks. Neil.

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers, and that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp, the certificate does not show up. What do I need to do to make my certificate available for use? UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting pfx was added to the machine-local store via mmc. Oddly, going into PowerShell, if I add the -CodeSigningCert flag when searching for the wildcard certificate, it is excluded from the search results for Get-ChildItem in my Cert:\LocalMachine\My path, but if I don't include the flag it is there.
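
    The -CodeSigningCert behaviour is telling: that flag filters on the Code Signing enhanced key usage (EKU 1.3.6.1.5.5.7.3.3), and the makecert call above never adds any EKU. One plausible fix, offered as a sketch rather than a confirmed requirement of the RemoteApp certificate picker, is to regenerate the cert with that EKU:

        REM As before, plus -eku to mark the cert for code signing
        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -eku 1.3.6.1.5.5.7.3.3 -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

        REM Verify the store now reports it as a code-signing cert
        powershell -Command "Get-ChildItem Cert:\LocalMachine\My -CodeSigningCert"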

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5; the system, boot, swap, etc. are all on /dev/sda, and I have two identical single-partition drives /dev/sdb1 /dev/sdc1 that are configured in RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file); then I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work. RAID is working fine (dmesg shows it gets loaded correctly). mdstat shows:

        # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[1] sdb1[0]
              XXXX blocks [2/2] [UU]
        unused devices: <none>

    Additionally, I can mount it anywhere other than its default directory (i.e. the following works, and I can read data off the drives):

        # mount /dev/md0 /mnt/data2
        EXT3-fs warning: mounting fs with errors, running e2fsck is recommended

    But when I run the following I get:

        # mount -a
        mount: /dev/sdb1 already mounted or /mnt/data busy

    It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my RAID partition. Again, it was all working fine until those yum updates. On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted and it didn't help. I'm really confused and need to get this working to get on with my work!
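
    Since /etc/mtab can fall out of sync with the kernel (which would explain a mount that exists but cannot be umounted by name), /proc/mounts is the more trustworthy view, and the EXT3-fs warning suggests a filesystem check is due anyway. A diagnostic sketch:

        # What the kernel actually has mounted at /mnt/data
        grep -E 'md0|mnt/data' /proc/mounts

        # Any process holding the mount point busy (-v verbose, -m treat as mount point)
        fuser -vm /mnt/data

        # With the array unmounted everywhere, run the check the warning asks for
        umount /mnt/data2
        fsck.ext3 -f /dev/md0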

  • Openfiler RAID 10 option not found

    - by chrisling106
    Hi, I'm building a NAS using Openfiler 2.3 (from the 32-bit ISO). First of all I want to experiment with it in a VM before going out and buying the hard drives needed. I created 5 virtual drives in VMware: sda is 2GB and the rest are 1GB each (sdb to sde). I left sda blank and want to set up a RAID 10 disk using sdb, sdc, sdd and sde. The 4 RAID partitions are set up successfully, but when I try to create a RAID device the only options for RAID level are 1, 0, 5 and 6; RAID 10 is not there! Can someone let me know what I have missed, please? TIA.
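
    If the GUI simply never offers RAID 10, one workaround, hedged because it bypasses the Openfiler UI entirely, is to build the equivalent nested array from the shell: two RAID 1 mirrors striped with RAID 0 (the partition names assume the four RAID partitions are sdb1 to sde1):

        # Two mirrors from the four 1GB RAID partitions
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

        # Stripe the mirrors together: RAID 1+0
        mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2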

  • Why do I get a DegradedArray event with mdadm

    - by azera
    Hello. Just so we're clear on what's happening: I bought 4 new SATA 2 drives, with the intent of using them in a RAID 5. All drives are fully recognised by both my BIOS and my Linux box (Gentoo). I created a RAID 5 array and fiddled a bit with it to understand how it works, how to monitor it, etc. At some point this triggered a DegradedArray event, even though the array is brand new. I tried stopping the array and recreating a new array with the same drives, but the new array starts degraded too. Here is what I used to create it:

        mdadm --create -l5 -n4 /dev/md/md0-r5 /dev/sdb /dev/sdd /dev/sde /dev/sdf

    Here is the output from my /proc/mdstat and mdadm --detail --scan:

        # cat /proc/mdstat
        Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md127 : active raid5 sdf[4] sde[2] sdd[1] sdb[0]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
              [>....................]  recovery =  2.8% (41689732/1465138496) finish=890.3min speed=26645K/sec
        unused devices: <none>

        # mdadm --detail --scan
        ARRAY /dev/md/md0-r5 metadata=0.90 spares=1 UUID=453e2833:81f22a74:64188b84:66721085

    As such I have a couple of questions: Does a RAID 5 array always start in degraded mode at first? Why does sdf have the number 4 in brackets instead of 3, why does it see a spare disk, and why is the 4th drive marked with _ instead of U (bad configuration?)? How can I recreate the array from scratch? Do I have to format each drive on its own before recreating it? Thanks for any help; I'm not sure what I should do at the moment.
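
    What the output shows matches mdadm's normal initial build of a RAID 5: the array is created degraded with one device attached as a spare, and that spare (sdf, hence the [4] and the spares=1) is rebuilt into the missing slot, which is why the fourth position reads _ until recovery completes. A sketch for letting it finish rather than recreating anything:

        # Watch the initial resync; [UUU_] should become [UUUU] at 100%
        watch -n 60 cat /proc/mdstat

        # Detailed state; look for "State : clean" once the rebuild is done
        mdadm --detail /dev/md127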

  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully creates the instance and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

        Bringing machine 'default' up with 'aws' provider...
        WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
        [default] Warning! The AWS provider doesn't support any of the Vagrant
        high-level network configurations (`config.vm.network`). They will be silently ignored.
        [default] Launching an instance with the following settings...
        [default]  -- Type: m1.small
        [default]  -- AMI: ami-a73264ce
        [default]  -- Region: us-east-1
        [default]  -- Keypair: banderton
        [default]  -- Block Device Mapping: []
        [default]  -- Terminate On Shutdown: false
        [default] Waiting for SSH to become available...
        [default] Machine is booted and ready for use!
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
        [default] Running provisioner: puppet...
        An error occurred while executing multiple actions in parallel.
        Any errors that occurred are shown below.

        An error occurred while executing the action on the 'default'
        machine. Please handle this error then try again:

        No error message

    I can then SSH into the instance by using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors have occurred but I'm not being given any useful information about them. My Vagrantfile is as follows:

        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu_aws"
          config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

          config.vm.provider :aws do |aws, override|
            aws.access_key_id = "REDACTED"
            aws.secret_access_key = "REDACTED"
            aws.keypair_name = "banderton"
            override.ssh.private_key_path = "~/.ssh/banderton.pem"
            override.ssh.username = "ubuntu"
            aws.ami = "ami-a73264ce"
          end

          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "manifests"
            puppet.module_path = "modules"
            puppet.options = ['--verbose']
          end
        end

    My Puppet manifest is as follows:

        package { [
          'build-essential',
          'vim',
          'curl',
          'git-core',
          'nano',
          'freetds-bin' ]:
          ensure => 'installed',
        }

    None of the packages are installed.
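
    To get past the unhelpful "No error message", a sketch of two ways to pull more detail out of the run (default.pp is an assumption about the manifest filename):

        # Full internal logging from Vagrant itself
        VAGRANT_LOG=debug vagrant provision

        # Or re-run just the Puppet step inside the instance to see Puppet's own errors
        vagrant ssh -c 'sudo puppet apply --verbose --debug --modulepath=/tmp/vagrant-puppet/modules-0 /tmp/vagrant-puppet/manifests/default.pp'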

  • How to allow IAM users to setup their own virtual MFA devices

    - by Ali
    I want to let my IAM users set up their own MFA devices through the console. Is there a single policy that I can use to achieve this? So far I can achieve it through a number of IAM policies, letting them list all MFA devices and list users (so that they can find themselves in the IAM console and ...). I am basically looking for a more straightforward way of controlling this. I should add that my IAM users are trusted users, so I don't have to lock them down to the minimum possible (although that would be quite nice); if they can see a list of all users, that is OK.
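
    A sketch of a single self-service policy along those lines; the action list is my best understanding of what the console flow needs, so treat it as a starting point to verify against the IAM docs rather than a known-good policy:

        aws iam create-policy --policy-name SelfManageMFA --policy-document '{
          "Version": "2012-10-17",
          "Statement": [
            { "Effect": "Allow",
              "Action": ["iam:ListUsers", "iam:ListVirtualMFADevices"],
              "Resource": "*" },
            { "Effect": "Allow",
              "Action": ["iam:CreateVirtualMFADevice", "iam:EnableMFADevice",
                         "iam:ListMFADevices", "iam:ResyncMFADevice"],
              "Resource": ["arn:aws:iam::*:mfa/${aws:username}",
                           "arn:aws:iam::*:user/${aws:username}"] }
          ]
        }'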

  • How do I host multiple SSL websites on a single EC2 instance using Amazon Elastic Load Balancers?

    - by Developr
    If I have 3 separate websites which all require SSL (separate certificates) that I want to host on the same EC2 instance(s) across multiple availability zones, so that we have the ability to scale and be highly available, how do I achieve this using ELBs in my Amazon VPC? Each site requires a separate IP address, so I have added multiple private IPs to the EC2 instance, but I am unsure how to bind the ELB to a certain IP on the instance. I was also able to set up multiple ELBs pointing to the same instance, but again, I am not seeing any way to bind each ELB to a separate IP on the instance. If this is not possible, what is the best option?

        1. Run each site on a separate EC2 instance / ELB combo (expensive and harder to maintain)
        2. Give each site a separate public IP and use Route 53 to do the load balancing (seems like a hack)
        3. Use a different load balancer option such as HAProxy that should be able to work like a normal load balancer appliance

    Please help!
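
    Classic ELBs register instances rather than addresses, so one hedged way around the per-IP binding problem is to give each site its own instance port and point each ELB's listener at that port. A sketch for one of the three sites (the name, certificate ARN and subnet ID are placeholders):

        # ELB for site1: terminates site1's cert, forwards to port 8081 on the instance
        aws elb create-load-balancer --load-balancer-name site1 \
          --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8081,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/site1" \
          --subnets subnet-11111111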

  • Remote assistance from Remote Desktop sessions: unable to control

    - by syneticon-dj
    Since Remote Control (aka session shadowing) is gone for good on Server 2012 Remote Desktop Session Hosts, I am looking for a replacement to support users in a cross-domain environment. Since Remote Assistance is supposed to work for Remote Desktop sessions as well, I tried leveraging it for support purposes by enabling unsolicited Remote Assistance for all Remote Desktop Session Hosts via Group Policy. All seems to be working well, except that the "expert" seems unable to actually exercise any mouse or keyboard control when the Remote Assistance session has been initiated from a Remote Desktop session itself. Mouse clicks and keystrokes from the "expert" session (Server 2012) are simply ignored, even after the assisted user has acknowledged the request for control. I would like to see this working through RD sessions for the support staff for a number of reasons:

        - not every support agent would have the appropriate client system version to support users on a specific terminal server (e.g. an agent might have a Windows Vista or Windows 7 station and thus be unable to offer assistance to users on Server 2012 RDSHs)
        - a support agent would not necessarily have a station which is a member of the specific destination domain (mainly because more than a single domain's users are supported)

    What am I missing?

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151; both have an elastic IP assigned and both run in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container's IP is 172.17.0.2, and its port 8111 is forwarded to port 8111 on the machine. If I try to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) it works flawlessly. If I try to access the files from the other machine (10.0.100.151) it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine I see the requests going in but no response coming back; if I ngrep on the container I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but with no success. Help in any way - even debugging directions - would be much appreciated. Thanks!
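
    Given that the container's response is visible inside the container but never leaves the host, a hedged place to look is the host's forwarding and NAT handling of traffic arriving from the other VPC machine. A diagnostic sketch:

        # Is forwarding enabled, and which of Docker's NAT and FORWARD rules match?
        sysctl net.ipv4.ip_forward
        sudo iptables -t nat -L -n -v
        sudo iptables -L FORWARD -n -v

        # Watch both directions on the host while wget runs from 10.0.100.151
        sudo tcpdump -ni any 'port 8111'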

  • How to use private DNS to map a private IP to a "non-registered" domain name

    - by PapelPincel
    I would like to use a private DNS (Route 53 in our case) to map hostnames to EC2 instances' private IP addresses. The hosted zone we are using for testing is not declared with any registrar (company-test.com.). There are different servers (Nagios, Puppet, ActiveMQ ...) all hosted in EC2, which means their IPs can change over time (restart, new instance launch...). It would be great if I could use DNS instead of the clients' /etc/hosts for mapping private IPs to internal domain names. The ActiveMQ server URL is activemq.company-test.com and it maps to (an A record for) the private IP address of the AMQ server. This URL is only reachable by other EC2 instances owned by the same AWS account. My question is: how do I configure EC2 instances so they can reach the ActiveMQ server WITHOUT having to buy the company-test.com domain?
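
    Route 53's private hosted zones fit this exactly: a zone attached to a VPC resolves only inside that VPC and never touches a registrar. A sketch with the current aws CLI (the zone ID, VPC ID and record values are placeholders):

        # A private hosted zone resolvable only inside the given VPC
        aws route53 create-hosted-zone --name company-test.com \
          --vpc VPCRegion=us-east-1,VPCId=vpc-12345678 \
          --caller-reference private-zone-001

        # Point activemq.company-test.com at the server's private IP
        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
          --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"activemq.company-test.com","Type":"A","TTL":60,"ResourceRecords":[{"Value":"10.0.0.42"}]}}]}'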

  • App or apps like NetLimiter Monitor for Linux

    - by Nathaniel
    I find NetLimiter Monitor quite handy on Windows, both for the realtime per-process bandwidth monitoring and for the recorded data usage. What tool or tools can I use to replicate this functionality on Linux? I am okay with using two separate apps for the bandwidth and the usage monitoring, but the usage monitoring is a must-have.
