Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 545 of 883

  • 550 relay not permitted

    - by Nick Swan
    Hi, we are using FogBugz on our server to handle customer support emails. Occasionally, though, we get errors back when sending: "550 relay not permitted". This seems to happen at random: sometimes sending an email to a person works, and the next time an email to the same person bounces back. I've tried setting up reverse DNS with the server host and creating the SPF record in GoDaddy, but we still get some of these errors. Is there anything else I can do, and is there a way of testing whether these are actually configured correctly? Many thanks, Nick
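
    A hedged way to sanity-check both records from any shell is with dig; the IP and domain below are placeholders for the mail server's public address and the sending domain.

        # reverse DNS (PTR): should resolve to the name the mail server uses in HELO
        dig +short -x 203.0.113.25

        # SPF: the sending domain's TXT record should include the server's IP
        dig +short TXT example.com

        # to see which hop actually returns the 550, talk SMTP to the recipient's
        # MX host by hand (swap in the real MX name)
        telnet mx.recipient-domain.example 25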

    Read the article

  • Monitoring bandwidth/latency/jitter between 2 sites?

    - by TheCleaner
    I have 2 sites connected via an MPLS network and I'd like to do the following: set up a host on each end that can "talk" back and forth and report/log what kind of throughput, jitter, latency, etc. they are experiencing between each other, in 5-minute intervals. Something similar to Qcheck, but something that can be automated. Bottom line: I'm trying to determine whether the WAN link is stable throughout the day or whether something is wrong. We have video conferences between these sites, and even on 1024 kbps calls we are experiencing delays and jitter. I'm hoping to exonerate the network with some testing.
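
    A minimal cron-able sketch of that kind of probe, assuming iperf3 and ping are available (neither is named in the question); the far end runs "iperf3 -s", and the host, bandwidth, and log path are placeholders.

        #!/bin/sh
        # wan-probe.sh - append one timestamped block of throughput/jitter/latency data per run
        REMOTE=10.0.2.10                     # host at the other site running "iperf3 -s"
        LOG=/var/log/wan-probe.log

        {
          date '+%F %T'
          iperf3 -c "$REMOTE" -t 10 | tail -n 3            # TCP throughput summary
          iperf3 -c "$REMOTE" -u -b 1M -t 10 | tail -n 3   # UDP run reports jitter and loss
          ping -c 20 -q "$REMOTE" | tail -n 2              # latency min/avg/max/mdev
        } >> "$LOG" 2>&1

    A crontab entry such as "*/5 * * * * /usr/local/bin/wan-probe.sh" would give the 5-minute cadence mentioned in the question.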

    Read the article

  • Toshiba Satellite error 10053A0000 when reinstalling Windows XP Home on an existing Windows 7 [closed]

    - by Jayapal Chandran
    I had installed Windows 7 for testing. Now I want to reinstall the original Windows XP Home. I am using the Toshiba installation (recovery) disk. The installation process asked a few questions, and I selected the option to retain the other partitions and to delete only the C: drive. In the next step I got this error: http://web1.toshiba.ca/support//techsupport/tsbs/all/-tsb001404.htm So, what should I do to keep my files on the D: drive and only allow the installation to delete the C: drive?

    Read the article

  • Browsing Pictures on a Mac

    - by Mr Woody
    Hi. After many years using Linux, I decided to buy a Mac. Now my main problem is: how do I synchronize pictures between my Linux machines and the Mac? I have been using digiKam on Linux, and I like it because I can browse the pictures directly from my directories (and it is easy for me to keep directories synchronized between the Mac and Linux). I have been testing iPhoto and Aperture, which are quite nice, but my understanding is that if I use them I have to import all the pictures into these applications, and that doesn't seem to be the ideal solution for me. I tried Picasa, but I don't find it as good as iPhoto and Aperture; on the other hand, it lets me browse directories without keeping two copies of the same pictures. I haven't tried Lightroom yet; would that be a good solution? I would appreciate any suggestion on this. Thanks!
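
    If keeping the directory trees in sync is the main requirement, a hedged rsync sketch may help (rsync ships with OS X and every major Linux distribution); the paths and hostname are placeholders, and --dry-run is worth keeping on the first pass.

        # push the Mac's picture tree to a Linux box, preserving timestamps,
        # and delete files that were removed locally; drop --delete if that feels risky
        rsync -av --delete --dry-run ~/Pictures/ user@linuxbox:/home/user/Pictures/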

    Read the article

  • iSCSI - what's faster?

    - by Unplugme71
    I have a DroboPro that is currently connected to a GS748TS switch. Also connected to the switch are a server and a few workstations. Which setup would give better performance? (1) Add a NIC to the server and connect the DroboPro directly to the new NIC via iSCSI. (2) Add a NIC to the server and create a dedicated VLAN for the new NIC and the Drobo. (3) Add a NIC to the server and attach it to a separate switch, with the DroboPro connected to that switch, making it a private network similar to a VLAN. The DroboPro has a single Ethernet connection, the server currently has a single Ethernet connection, and the workstations each have a single Ethernet connection.

    Read the article

  • SQL Server 2005 SP3 Express Backups Incredibly Slow

    - by Adam Robinson
    I'm attempting to troubleshoot an issue with one of our customers who's using SQL Server 2005 SP3 Express to house their application data. The automatic backups that we perform when upgrading our application are timing out after 30 minutes, and I've been sitting and watching the backup take place in SSMS for about 20 minutes now and it's only gotten to 30%. The database is only slightly over 1GB, so I'm baffled as to what could be causing this sort of horrible performance. The machine is a 1.87GHz Xeon with 3GB of RAM running Windows Server 2003 R2. While that's hardly a powerful box, this seems ridiculous. Does anyone have any idea why these backups could be taking so long and, more importantly, how I can do something about it?
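
    One hedged way to take the 30-minute timeout out of the picture and watch where the time goes is to run the backup by hand from the command line with progress reporting; the instance name, database name, and path below are placeholders.

        sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [AppDb] TO DISK = 'D:\Backups\AppDb.bak' WITH STATS = 10"

    STATS = 10 prints a progress line every 10 percent, which makes it easier to see whether throughput is uniformly slow or stalls at a particular point.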

    Read the article

  • Keepalived takes several minutes to recover in a particular situation

    - by NathanE
    I've set up Keepalived for a master-slave style virtual IP, and it seems to work well. Both nodes are hosted in almost identical VMs. If I "pause" the VM that is running the master, the slave takes over almost instantly, as expected. However, if I then "unpause" the VM that runs the master, the virtual IP stops responding to pings, and it takes a good 4 or 5 minutes for it to start pinging again. It seems to be getting desynchronised due to the nature of the way I'm testing it (by pausing/unpausing the VMs). I admit that pausing and unpausing VMs is a slightly dodgy way to test this, but it has raised a concern for me that there could be other scenarios that cause the same undesirable behaviour. Is this expected / by design? Is there anything I can do to the config to improve it? Thanks.
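
    A hedged keepalived.conf sketch of one common way to tame this kind of flapping: start both nodes as BACKUP and set nopreempt, so a recovering node waits rather than fighting over the VIP the moment it wakes up. The interface, router ID, and address are placeholders, and whether this helps the pause/unpause case specifically is an assumption to verify.

        vrrp_instance VI_1 {
            state BACKUP          # both nodes start as BACKUP when nopreempt is used
            interface eth0
            virtual_router_id 51
            priority 150          # give the preferred node the higher priority
            advert_int 1
            nopreempt             # a recovered node does not grab the VIP back immediately
            virtual_ipaddress {
                192.168.0.100/24
            }
        }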

    Read the article

  • MySQL replication to multiple places

    - by Frederik Nielsen
    It was tricky to find a good title for this question, but here goes. I have a few development machines on which I develop my PHP applications and test them via a local web server. This works out pretty well for each machine. However, I would like to replicate the DB from my machines to a central location. So, to sum up: DEV1 - CENTRAL, DEV2 - CENTRAL, DEV3 - CENTRAL, and also CENTRAL - DEV1, CENTRAL - DEV2, CENTRAL - DEV3. I hope this makes sense, as I cannot find an easy way to describe it. Basically, it is two-way replication where all 4 databases contain the same data, and each of them can be updated locally and then pushed out to the others. Is this actually doable? All my dev machines are running Windows 7, and my central DB server is running CentOS 6.
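
    A hedged sketch of the classic two-way (master-master) wiring between one dev box and CENTRAL; hostnames, credentials, and binlog coordinates are placeholders, and the auto_increment settings are the usual trick for keeping several writable copies from colliding on keys. Note that a classic MySQL slave can follow only one master, so a full hub with three dev machines all replicating both ways through CENTRAL needs more than plain replication (for example, multi-source replication in newer MySQL/MariaDB releases).

        # my.cnf / my.ini on each server, under [mysqld], with unique values per box:
        #   server-id                = 1        # 1..4, one per machine
        #   log-bin                  = mysql-bin
        #   auto_increment_increment = 4        # total number of writable servers
        #   auto_increment_offset    = 1        # this server's slot, 1..4

        # then point each side at the other (run on DEV1; mirror it on CENTRAL):
        mysql -u root -p -e "
          CHANGE MASTER TO
            MASTER_HOST='central.example.com',
            MASTER_USER='repl',
            MASTER_PASSWORD='********',
            MASTER_LOG_FILE='mysql-bin.000001',
            MASTER_LOG_POS=4;
          START SLAVE;"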

    Read the article

  • Periodically overriding NTP for simulation purposes

    - by Gerard
    I have this situation: NTP is used to sync time on a set of Windows 7 and Server 2008 machines. Nothing out of the ordinary about that. Periodically on this system, the time needs to be changed for testing/training purposes (it is a training simulation system that has a lot of time-dependent operations). My question: as NTP in general does not really like big time jumps or changes, AFAIK, is there a standard way this could be set up so that the clock can be changed at the root NTP server and the change propagates through the system in a reasonable amount of time (a minute or two)? It is not acceptable to disable and/or restart all the NTP client services to achieve this. Any ideas? It would be nice to do this without writing some kind of custom script to disable services and update clocks all over the place.
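
    A hedged Windows-side sketch, assuming the clients use the built-in w32time service: raise the phase-correction limits so a deliberately large jump is accepted instead of rejected, then force a resync without restarting anything. The registry values are standard w32time settings, but whether this propagates fast enough for the simulation is something to verify in a lab first.

        :: allow the time service to accept arbitrarily large corrections
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
        w32tm /config /update

        :: after the root server's clock has been changed, push a client to resync now
        w32tm /resync /rediscover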

    Read the article

  • How do I unmount a tmpfs that is missing from /etc/mtab?

    - by vrinek
    I have the following line in /etc/fstab: "none /home/hydra/tmp tmpfs user,noauto,size=1000M,uid=1001,gid=1001 0 0". I can do mount ~/tmp as user hydra and it gets mounted OK. The only problem is that even though it gets added to /proc/mounts, it does not get added to /etc/mtab. When I try umount ~/tmp (again as hydra) it complains: "umount: /home/hydra/tmp is not mounted (according to mtab)". And when I try -f or -n, it complains that I am not root. Some more info on the system that shows this problem: with sudo umount /home/hydra/tmp the filesystem does get unmounted (I think I needed -f too); the Debian version is testing; mount --version reports "mount from util-linux 2.19.1 (with libblkid and selinux support)"; ls -l /etc/mtab shows "-rw-r--r-- 1 root root 921 Nov 14 09:08 /etc/mtab"; cat /proc/mounts | grep rootfs shows "rootfs / rootfs rw 0 0"; and none of /home, /home/hydra, or /home/hydra/tmp is a symbolic link.
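
    A hedged workaround sketch: unmount with root privileges for now, and optionally make /etc/mtab mirror the kernel's own mount table so user-level umount stops consulting a stale file. The symlink approach is an assumption for this particular box, though it later became the default on Debian-based systems.

        # immediate fix: unmount as root (the question notes this already works, possibly with -f)
        sudo umount /home/hydra/tmp

        # optional: let mtab track what the kernel actually has mounted
        sudo ln -sf /proc/self/mounts /etc/mtab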

    Read the article

  • My HP Vista-based laptop has become very slow recently

    - by goldenmean
    My HP laptop runs Vista Home Premium. When I try to start Firefox or Internet Explorer it becomes very slow; no other apps are affected. When I checked Performance in Task Manager, it showed the "Free" physical memory as 0 bytes almost all the time. This is recent; it didn't used to be zero. The laptop has 2 GB of RAM. I have nothing running in my tray except the sound control, the laptop power-plan indicator, and the network status indicator, and there are no other processes whose memory usage adds up high enough to leave free memory at 0. So what could be hogging the memory and making the laptop so slow? Any pointers would help, as it is crawling at the moment.

    Read the article

  • MongoDB on FreeBSD

    - by Hartator
    We are currently using MongoDB 2.0.0 on Mac OS, but our servers are running FreeBSD. The most recent FreeBSD port of MongoDB is version 1.8.3. I have tried to compile 2.0.0 by hand, but I ran into errors that I didn't manage to fix. I also came across a few old resources on the Internet saying that MongoDB does not run well on FreeBSD, mainly because of performance issues with memory-mapped files. Is that true? Does it mean we have to switch our servers to another OS? Thanks for your opinions! Sources: http://groups.google.com/group/mongodb-user/browse_thread/thread/8131b7e5a5c710d9 http://ivoras.net/blog/tree/2009-11-05.a-short-time-with-mongodb.html

    Read the article

  • Elastic Load Balancer & SSL termination

    - by Aaron Scruggs
    I am setting up a Rails app on AWS that: 1) must have all traffic SSL encrypted, 2) will fluctuate heavily in traffic on a weekly basis, and 3) will be maintained by someone who is a stronger coder than sysadmin but will be responsible for both. I am thinking of SSL termination on an Elastic Load Balancer backed by small EC2 instances running nginx and Unicorn. A small subset of the requests will take longer than 10 seconds, so I am also debating using Thin instead of Unicorn. My question is this: is this sane, or am I stepping into a quagmire of cost, maintainability, security, or performance problems?

    Read the article

  • Which VM software is easier to install/configure and performs better?

    - by André Alçada Padez
    Well, I hope this doesn't get categorized as a boating question, but it really is related to programming. I have Windows XP, and I am going to have to run a VM with Windows 7, Visual Studio 2008, SQL Server 2008, IIS 7 (8 in a little while), WAMP, Photoshop CS5, etc. So I was wondering which would be easier to install and configure, and give the best performance: VirtualBox or Microsoft's Virtual PC? Thank you. Well, I tried VirtualBox; it keeps crashing for some reason. I think I'm going to try Virtual PC, just to stick to an all-Microsoft solution.

    Read the article

  • Virtual machines and cryptography

    - by Unknown
    I suspect I'm a bit off-topic for the site's mission, but it seems a better fit for this question than Stack Overflow. I'm preparing to create a VM with sensitive data (personal use; it will be a web+mail+... appliance of sorts), and I'd like to protect the data with cryptography as well; the final choice has to be cross-platform for the host. Basically, I have to choose between guest-level cryptography inside the VM (say, dm-crypt or similar) and host-level cryptography with TrueCrypt. Do you think the "TrueCrypt volume containing the virtualized disks" approach will hit the I/O performance of the VM badly (so that dm-crypt-like approaches inside the VM would be better), or is it doable? I'd like to protect all the guest data, not only my personal data, so that I can suspend the VM freely without worrying about the swap partition, etc.
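
    For comparison, a hedged sketch of the guest-level (dm-crypt/LUKS) option; /dev/vdb is a hypothetical second virtual disk, and the crypttab line shows the common random-key approach for the guest's swap so suspending the VM doesn't leave plaintext behind in it.

        # inside the guest: create and open a LUKS container on a spare virtual disk
        cryptsetup luksFormat /dev/vdb
        cryptsetup luksOpen /dev/vdb securedata

        # put a filesystem on the mapped device and mount it
        mkfs.ext4 /dev/mapper/securedata
        mkdir -p /srv/secure
        mount /dev/mapper/securedata /srv/secure

        # swap encrypted with a throwaway key on every boot, via /etc/crypttab:
        #   cryptswap  /dev/vda2  /dev/urandom  swap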

    Read the article

  • Free Google Docs alternative compatible with Opera

    - by f4k3
    Well, Google Docs isn't working well for me: too many bugs, and it's pretty slow (especially when saving documents). I have tried several alternatives: Zoho (they say it's not compatible with Opera, and it's true; you can't even Ctrl+V text), Buzzword (really slow, and some functions don't work properly in any browser; for example, "increase indent" increases a random text indent), Etherpad (taken over by Google and shut down), Peepel (a cool thing, almost a free virtual desktop in a browser, but buggy; I saved a document, tried to open it, and an error occurred, and the document was lost), and OpenGoo (went commercial). At the moment I'm testing ThinkFree Online; it's a bit slow (Java :P) and some minor things don't work (like dragging a toolbar), but it has cool functionality (almost like OpenOffice, which I use at home) and it actually works with Opera (create, save, edit documents). Maybe I'll try Scribd, but is it an office/sharing platform? Are there any others worth trying?

    Read the article

  • Upgrading my computer system for office use

    - by denise ellul
    I presently have the computer system listed below. What should be changed or upgraded among the products I currently own? I am interested in performance issues related to cache memory, bus speed, RAM, and CAS latency, as well as other considerations. Thanks for your help.
    Processor (CPU): Intel Celeron Dual Core E3300 2.5 GHz
    Motherboard: Asus P5QPL-AM G41
    Main Memory (RAM): 2 GB Team Elite DDR2 PC8000
    Case: Coolermaster RC330
    Power Supply Unit: 500W EZ-Cool Standard
    Storage Device (Hard Drive): 500 GB Samsung
    Video Card: Intel GMA X4500 (on-board)
    Optical Drive: LG GH22NS50
    Sound Card: AC 97 (on-board)
    Card Reader: Akasa Black
    TFT Monitor: 19" ViewSonic
    Speakers: Logitech S120 2.0

    Read the article

  • Will Vimperator always be this awesome?

    - by Martín Fixman
    About a week ago I started using Vim and fell completely in love with it. However, today I installed the Vimperator extension for Firefox, and though there are some problems (all of which will be solved once I've used it long enough to get used to it), I found it great. However, I'm still in the "holy fuck this is totally awesome" phase of software testing, and in some time I'll go back to the "I have this thing" phase. Just to be sure, will it be a good idea to use it regularly? I want to hear the experiences of users and ex-users.

    Read the article

  • SSH Port Forward 22

    - by j1199dm
    I'm trying to set up the following: at work, I want to create a local port that forwards to port 22 on my home server, i.e. ssh -L 56879:home:22 username@home -p 443. Right now I'm testing this on my two machines at home, my Ubuntu server and my iMac (iMac: 192.168.1.104, Ubuntu: 192.168.1.103). On the iMac I run: ssh -p 443 -L 56879:192.168.1.103:22 [email protected]. In ~/.ssh/config on my iMac I have the port set to 56879, so when I do git pull remoteserver:/path/to/repo.git on my iMac, git should use the iMac's SSH client on port 56879 (since that's set in the config), which should forward to port 22 on my Ubuntu machine. I keep getting "connection refused". Any ideas?
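
    A hedged sketch of how the pieces usually have to line up: the -L forward listens on the iMac's loopback interface, so the git remote (or the Host entry it resolves to) has to point at localhost:56879 rather than at the remote hostname. The alias name and user below are placeholders.

        # 1) open the tunnel (-N just forwards, without running a remote shell)
        ssh -p 443 -N -L 56879:192.168.1.103:22 [email protected]

        # 2) in ~/.ssh/config on the iMac, aim an alias at the local end of the tunnel:
        #      Host hometunnel
        #          HostName localhost
        #          Port 56879
        #          User hydra

        # 3) use the alias as the git remote
        git pull hometunnel:/path/to/repo.git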

    Read the article

  • Prevent Ultrabay HDD from ejecting on sleep

    - by Bryce Evans
    I have a Lenovo ThinkPad T430s with a small SSD primary drive and a 500 GB Ultrabay drive. When I put the computer to sleep and then return, I get a message titled "Problem ejecting <drive name>": "Windows can't stop your 'Generic volume' device because a program is still using it." This pop-up is very annoying every time I use the computer. I don't want to disable write caching (D: > Hardware > [drive] > Policies > quick removal) because I want the best performance and I never remove the drive. Is there any way to avoid this pop-up?

    Read the article

  • Fine-tuning a LNMP stack

    - by Norman
    I'm in the process of setting up a server with 4 GB RAM and 2 CPUs. The stack will be CentOS + NGINX + MySQL + PHP (with APC) and spawn-fcgi. It will be used to serve 10 WordPress blogs, 3 of which receive about 20,000 hits per day. Each WordPress instance is equipped with W3 Total Cache. I have a few variables to play with: NGINX (how many worker_processes, worker_connections, etc.), PHP (what parameters in php.ini should I change? what about APC?), and spawn-fcgi (right now I have 6 php-cgi processes spawned; how many should I have?). I realize it's hard to tell without testing, but if you could provide me with some ballpark numbers, that would be helpful too.
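
    A hedged nginx fragment with ballpark starting values for a 2-CPU, 4 GB box; these are assumptions to load-test against, not tuned results, and the php-cgi arithmetic in the comment is likewise only a rough rule of thumb.

        # nginx.conf: one worker per CPU core is the usual starting point
        worker_processes  2;

        events {
            # per-worker connection ceiling; 1024 is a common value to start from
            worker_connections  1024;
        }

        # spawn-fcgi: roughly (RAM you can spare for PHP) / (per-process footprint);
        # e.g. ~1 GB for PHP at ~80 MB per WordPress php-cgi process suggests
        # around 10-12 processes, so the current 6 is in the right neighbourhood.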

    Read the article

  • chmod -R 777 /. - RHEL 5.5

    - by user1263746
    A shell script test went bad and issued chmod -R 777 /. against the system instead of chmod -R 777 ./, and as expected it clobbered the critical permission metadata. We have turned the system off, and it will not function properly the next time it is turned on. I am told that rpm --setperms -a and rpm --setugids -a should at least fix the permissions of the packages maintained by RPM. Is it worth doing? And is there any script available that will copy the permissions from an identical system, to at least get the box working? The box is running RHEL 5.5. Thanks!
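
    A hedged sketch of the "copy permissions from an identical system" idea, assuming GNU find on the healthy twin box: restore RPM-owned files first, then replay recorded ownership and modes for everything else. Paths containing quotes or newlines would need extra care, and the script name is a placeholder.

        # on the healthy reference box: record owner, group and octal mode for every
        # path on the root filesystem (skipping /proc and /sys)
        find / -xdev \( -path /proc -o -path /sys \) -prune -o \
            -printf 'chown %u:%g -- "%p"\nchmod %m -- "%p"\n' > /tmp/restore-perms.sh

        # copy /tmp/restore-perms.sh to the damaged box, then there:
        rpm --setugids -a          # restore ownership for RPM-managed files
        rpm --setperms -a          # restore modes for RPM-managed files
        sh /tmp/restore-perms.sh   # replay ownership/modes recorded from the twin system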

    Read the article

  • Should I have a Heroku worker dyno for polling an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably use CPU cycles listening to the queue forever and hurt performance. And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay that price just to poll a single queue, or is this not a case for a worker? The script code:
        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end
    I need help!! Thanks

    Read the article

  • Why is it a bad idea to use multiple NAT layers, or is it?

    - by iamrohitbanga
    The computer network of an organization has NAT with the 192.168/16 IP address range. One department has a server with the IP address 192.168.x.y, and this server handles the department's hosts behind another NAT with the 172.16/16 address range. Thus there are 2 layers of NAT. Why don't they use subnetting instead? That would allow easy routing. I feel that multiple layers of NAT can cause performance losses. Could you please help me compare the two design strategies?

    Read the article

  • PowerShell Remoting: No credentials are available in the security package

    - by TheSciz
    I'm trying to use the following script:
        $password = ConvertTo-SecureString "xxxx" -AsPlainText -Force
        $cred = New-Object System.Management.Automation.PSCredential("domain\Administrator", $password)
        $session = New-PSSession 192.168.xxx.xxx -Credential $cred
        Invoke-Command -Session $session -ScriptBlock { New-Cluster -Name "ClusterTest" -Node HOSTNAME }
    to remotely create a cluster (it's for testing purposes) on a Windows Server 2012 VM. I'm getting the following error:
        An error occurred while performing the operation.
        An error occurred while creating the cluster 'ClusterTest'.
        An error occurred creating cluster 'ClusterTest'.
        No credentials are available in the security package
            + CategoryInfo          : NotSpecified: (:) [New-Cluster], ClusterCmdletException
            + FullyQualifiedErrorId : New-Cluster,Microsoft.FailoverClusters.PowerShell.NewClusterCommand
    All of my other remote commands (installing/making changes to DNS, DHCP, NPAS, GP, etc.) work without an issue. Why is this one any different? The only difference is in the -ScriptBlock tag. Help!
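
    A hedged sketch, assuming this is the classic second-hop problem (New-Cluster running inside the remote session has no credentials left to present to the other machines it needs to touch); CredSSP delegation is one way to address that. The IP is the placeholder from the question.

        # on the workstation (client side): allow delegating credentials to the target
        Enable-WSManCredSSP -Role Client -DelegateComputer "192.168.xxx.xxx" -Force

        # on the Server 2012 VM (run once): accept delegated credentials
        Enable-WSManCredSSP -Role Server -Force

        # then open the session with CredSSP authentication and retry
        $session = New-PSSession 192.168.xxx.xxx -Credential $cred -Authentication Credssp
        Invoke-Command -Session $session -ScriptBlock { New-Cluster -Name "ClusterTest" -Node HOSTNAME }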

    Read the article
