Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Installing SSL certs with nginx on Amazon EC2

    - by Ethan
    I finally got a cert from an authority and am struggling to get things working. I've created the appropriate combined certificate (personal + intermediate + root) and nginx is pointing to it. I got an Elastic IP and attached it to my EC2 instance, and my DNS records point to that IP. But when I point the browser at the hostname, I get the standard "Connection Untrusted" page with ssl_error_bad_cert_domain. Port 443 is open - I can get to the site over https if I ignore the warning. The weird thing is, under technical details it lists the domain I tried to access as valid! When I try to diagnose with SSL testing sites, they don't even detect a certificate. What am I missing here? The domain is yanlj.coinculture.info. Note that I've got coinculture.info running on a home server without a dedicated IP with the same problem, but I'll be moving that to the same EC2 instance as soon as I figure this out. I thought the Elastic IP would solve things, but it hasn't.
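    A quick way to see exactly which certificate (and chain) nginx is actually serving is to query it directly. This is only a diagnostic sketch, with the hostname taken from the question:

        openssl s_client -connect yanlj.coinculture.info:443 -servername yanlj.coinculture.info </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

    If the subject doesn't match the hostname, nginx is most likely answering from a different (default) server block, or the combined file isn't ordered with the server certificate first followed by the intermediates; SSL testers detecting no certificate at all suggests they may be reaching a different address than your browser is, so the DNS records and the Elastic IP association are worth re-checking.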


  • Should I have a Heroku worker dyno for polling an AWS SQS queue?

    - by Luccas
    I'm confused about where to put a script that polls an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will presumably burn CPU cycles listening to the queue forever and hurt performance. But if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay that price just to poll a single queue, or is a worker the wrong tool for this? The polling code:

        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end

    I need help!! Thanks
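    If the polling does end up on a worker dyno, the usual wiring is a Procfile entry that runs the loop as its own process. This is just a sketch; poll_sqs.rb is a hypothetical script holding the code above:

        web: bundle exec rails server -p $PORT
        worker: bundle exec ruby poll_sqs.rb

    The worker process then blocks in queue.poll without tying up the web dyno, which is exactly the separation Heroku charges for; if near-real-time processing isn't required, a scheduled task that drains the queue periodically is the cheaper alternative.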


  • 912 stream processors available in OpenCL?

    - by tugrul büyükisik
    I am thinking of assembling this system: an AMD CPU (A8-3870 APU, which has a Radeon HD 6550D inside: 400 stream processors, xxx GFLOPS) for about $110; an AMD graphics card, the HD 7750 (512 stream processors, 819 GFLOPS peak performance) for about $170; appropriate RAM (1600 MHz bus); and a mainboard. What GFLOPS level can I sustain using OpenCL and similar programs? Can I use all 912 stream processors at the same time? I am not trying to ask a versus question. I need to know which would be better for scientific computing (75% of the time) and gaming (25% of the time), because I have a low budget. By "scientific calculations" I mean fluid dynamics and solid-state physics simulations; by games I mean those that need OpenCL and PhysX.
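    One quick sanity check, assuming the clinfo utility is installed, is to list the platforms and devices the OpenCL drivers actually expose; both GPUs have to show up as separate devices before any program can touch all 912 stream processors, and even then the work has to be split across the two devices explicitly, since OpenCL does not merge them into one:

        clinfo | grep -iE 'platform name|device name|max compute units'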


  • Will Vimperator always be this awesome?

    - by Martín Fixman
    About a week ago I started using Vim and fell completely in love with it. However, today I installed the Vimperator extension for Firefox, and though there are some problems (most of which should go away once I get used to it), I found it great. Still, I'm currently in the "Holy fuck this is totally awesome" phase of software testing, and at some point I'll settle into the "I have this thing" phase. Just to be sure: is it a good idea to use it regularly? I want to hear about the experiences of users and ex-users.


  • Postfix, saslauthd, mysql, smtp authentication problems

    - by italiansoda
    I'm trying to get authentication working on my mail server (Ubuntu 10.04) but am having trouble. Postfix handles SMTP and Courier handles IMAP. Postfix authentication goes through Cyrus saslauthd (I haven't really tried Dovecot), and usernames and passwords are stored in a MySQL database. Logging in over IMAP-SSL works from a remote client (Thunderbird) and I can read my mail. I can't get the SMTP side working, and I've narrowed the issue down to saslauthd. Testing with testsaslauthd -u 'username' -p 'password' -s smtp returns connect() : Permission denied. The password in the database is encrypted, and I assume testsaslauthd takes a plain-text password and encrypts it. I'm looking for someone to walk me through getting this working. I'm new to mail servers and have never gotten one fully working. Thanks. Ask me which log files I should look at or post, which tests to run, and which permissions to check.
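    connect() : Permission denied from testsaslauthd usually means the caller can't reach the saslauthd socket, rather than that the credentials are wrong. A few diagnostic commands worth running (a sketch; the socket path is the Ubuntu default and may differ on your install):

        ls -ld /var/run/saslauthd /var/run/saslauthd/mux   # the socket directory must be accessible to the calling user
        sudo adduser postfix sasl                          # on Ubuntu, the postfix user needs the sasl group
        sudo testsaslauthd -u username -p password -s smtp -f /var/run/saslauthd/mux

    If Postfix runs chrooted (the Ubuntu default), saslauthd also has to place its socket under /var/spool/postfix/var/run/saslauthd, which is configured in /etc/default/saslauthd.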


  • Add custom Virtual Machine icons to VirtualBox

    - by Iszi
    I'd like to use custom icons to better distinguish machines running the same OS from each other in VirtualBox. Is this possible? If so, which file(s) do I need to add or edit? Examples: I've got two Windows 7 VMs. One I use as a sandbox for testing various things, and the other I use when I need to connect to work (ideally, my personal system - the host machine - never connects directly). I'd like to have perhaps a beaker for the sandbox and a suitcase for the work machine. I've got two Ubuntu VMs. One is BackTrack Linux, the other is a build I'm using to learn more about the OS. I wouldn't mind keeping the regular icon for the latter, but I'd like to use one of BackTrack's icons or images for the former. I'm running VirtualBox 4.1.6 on Windows 7 x64.


  • Test site speed

    - by Elad Lachmi
    I am test-driving an Akamai CDN architecture and, before committing to buy, I would like to gauge the real performance gain from the acceleration feature. What would be the best approach for running speed tests from different locations around the world? I would like to test page load speed, not just server response time, and from as many edge locations as possible. I don't mind a paid service if it's the best option. Thank you!
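    For a quick single-object baseline from any machine you have a shell on, curl's timing variables are handy; note this measures one request, not a full page render with all its assets. A sketch with a placeholder URL:

        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' https://www.example.com/

    For true page-load timing from many geographic locations, a hosted multi-location testing service run both in front of and behind the CDN is the usual route.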


  • SSH Port Forward 22

    - by j1199dm
    I'm trying to set up the following: at work I want to create a local port that forwards to port 22 on my home server: ssh -L 56879:home:22 username@home -p 443. Right now I'm testing this on my two machines at home, an Ubuntu server and an iMac. iMac: 192.168.1.104, Ubuntu: 192.168.1.103. On the iMac I run: ssh -p 443 -L 56879:192.168.1.103:22 [email protected]. In ~/.ssh/config on my iMac I have Port set to 56879, so when I run git pull remoteserver:/path/to/repo.git on the iMac, git should use the iMac's SSH client on port 56879, which should forward to port 22 on the Ubuntu machine. I keep getting connection refused. Any ideas?
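    One thing to check: a -L forward only opens the listening port on the machine running ssh (here, localhost on the iMac), so git has to connect to localhost on port 56879, not to the remote hostname. A hedged ~/.ssh/config sketch (the host alias and user are made-up names):

        Host homegit
            HostName localhost
            Port 56879
            User git-user

    With that entry, git pull homegit:/path/to/repo.git goes through the tunnel; a remaining "connection refused" would then point at the tunnel itself (the ssh -L session) not being up, or not reaching 192.168.1.103:22.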


  • Information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I haven't been able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this procfs entry is related to newer kernels running the CFS scheduler? This is a CentOS distro running kernel 2.6.24.7-149.el5rt with the PREEMPT_RT patch. Any thoughts?
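    The files are plain text, so for a first look they can simply be dumped per thread; a sketch (the process name is a placeholder, and the exact field names vary between kernel versions):

        pid=$(pidof my_server)
        head -n 5 /proc/$pid/sched                      # summary line plus the first per-task counters
        grep -H nr_switches /proc/$pid/task/*/sched     # context-switch counts for every thread

    For a highly threaded server the per-thread files under task/*/sched are the interesting ones; the top-level /proc/pid/sched essentially describes the main thread.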


  • Looking for a free or open-source burner emulator [closed]

    - by Jared Harley
    Possible Duplicate: Virtual CDR driver. I am looking for a free or open-source virtual CD/DVD emulator to run in a Windows environment. What I want is similar to what SlySoft's Virtual CloneDrive or Daemon Tools provides, but the emulated drive needs to be a burner of some kind. The burner should be able to save disc images (.iso, .ccd, etc.) to my hard drive - basically the same as if I had burned the files to a CD-R and then ripped them back to a disc image. I have already looked around and come across two: DVD neXt COPY iTurns and NoteBurner M4P. Both of these programs create a virtual CD-RW drive, but it is integrated into their product (for burning from iTunes to create mp3 files) and cannot create disc images. I am currently writing a piece of software that will be able to burn disc images onto CDs/DVDs, and I don't want to end up with 100 coasters while I'm testing it. Anyone have any ideas? Related Server Fault question: Create netbook recovery image without DVD burner (virtual burner?)


  • How to enter a bash script at the command line, but not process the script until the entire script has been entered

    - by MHGL
    I am performing some interactive testing using HP's QuickTest Professional and Linux. I am connecting via SSH and feeding the Bash script lines directly into the command line. The problem is that the script executes as it is entered. I'm trying to find a way to feed the script to the command line but defer execution until the entire script has been entered. Does anyone have experience doing this? I'll admit it isn't the ideal way to do this, but it's what I'm faced with at the moment. Any other suggestions are welcome. Thanks!
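    One simple approach, as a sketch: pipe the whole script into a fresh shell through a quoted here-document, so nothing runs until the closing delimiter line is entered:

        bash -s <<'END_OF_SCRIPT'
        echo "first step"
        echo "second step"
        END_OF_SCRIPT

    Wrapping the pasted lines in a function and calling it afterwards, or grouping them with { ...; }, defers execution the same way inside the current shell.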


  • Accessing multiple local HDs or RAID with ESXi 4.0

    - by Shawn Anderson
    How do I get additional hard drives recognized and used by ESXi 4.0? When I purchased my system I had two 2 TB drives, but when I installed ESXi it only recognized one of them. I'm happy to buy whatever number of drives I need (I have a four-bay SATA setup in my Dell T310). What are my options? RAID? If so, is it supported? I assume I would need hardware RAID rather than software, since ESXi is so small. The VMware forums (where I've lived for the last two days) are a charlie foxtrot of outdated and conflicting info. I want to use my T310, with 32 GB of RAM and a 2.8 GHz quad core, to run many lab Windows VMs. I don't need production-level availability, but I do want decent performance, even though it's a lab environment. A huge thanks to Jim B., Zypher, Helvick, and Jeff Hengesbach, who posted answers to my earlier question on why ESXi was so sluggish.


  • First time setting up a MySQL database.

    - by Wilduck
    In trying to learn how to work with the LAMP stack, I've hit a wall with MySQL. I can't seem to find a good reference for the first-time setup of MySQL for use with Apache and Python. So my question is four-fold: 1) Under what circumstances should I create my first database? That is, which user do I use (Apache's httpd user? root?) 2) How do permissions work? 3) Do I have to do anything on the MySQL side to make MySQL talk to Apache, or to Python/Django? 4) Is there a good resource online that describes setting all of this up? I've found plenty for using a database once it's in place, but none for the initial setup. Notes: I'm running my LAMP stack on a dedicated little box for testing/learning purposes only, so I don't have access to any DBA who could help me, as much as I'd like one.
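    As a rough answer to 1) and 2): the database and an application account are created as the MySQL root user, and the web application then connects with that low-privilege account rather than as root or as Apache's user. A minimal sketch (the database name, user and password are placeholders):

        mysql -u root -p <<'SQL'
        CREATE DATABASE webdb CHARACTER SET utf8;
        CREATE USER 'djangoapp'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
        GRANT ALL PRIVILEGES ON webdb.* TO 'djangoapp'@'localhost';
        FLUSH PRIVILEGES;
        SQL

    Django then gets the same name, user and password in its DATABASES setting; neither Apache nor MySQL needs any special configuration to "talk" to each other, as long as the Python MySQL client library is installed.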


  • Fine-tuning a LNMP stack

    - by Norman
    I'm in the process of setting up a server with 4 GB of RAM and 2 CPUs. The stack will be CentOS + nginx + MySQL + PHP (with APC) and spawn-fcgi. It will serve 10 WordPress blogs, 3 of which receive about 20,000 hits per day. Each WordPress instance runs the W3 Total Cache plugin. I have a few variables to play with: nginx (how many worker_processes, worker_connections, etc.), PHP (which parameters in php.ini should I change? what about APC?), and spawn-fcgi (right now I have 6 php-cgi processes spawned - how many should I have?). I realize it's hard to tell without testing, but if you could provide some ballpark numbers, that would be helpful too.
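    Purely as ballpark starting points for a 2-CPU / 4 GB box - these are assumptions to measure against, not recommendations:

        # nginx.conf
        worker_processes  2;                   # one worker per CPU core
        events { worker_connections  1024; }

        ; php.ini / apc.ini
        apc.shm_size = 128M                    ; enough shared memory for 10 WordPress code bases (older APC builds want just "128")
        memory_limit = 128M

    Six php-cgi children is a reasonable start for two CPUs; raise the count only if all of them stay busy while the CPUs still have headroom.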


  • Why is it a bad idea to use multiple NAT layers, or is it?

    - by iamrohitbanga
    The computer network of an organization uses NAT with the 192.168.0.0/16 address range. One department has a server with an IP address 192.168.x.y, and that server fronts the department's hosts with another NAT using the 172.16.0.0/16 range, so there are two layers of NAT. Why don't they use subnetting instead? That would allow straightforward routing. I suspect multiple layers of NAT cause performance losses. Could you please help me compare the two design strategies?
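    For contrast, the routed alternative needs nothing more than a department subnet and a static route on the core router, so every host keeps an address that is directly reachable and no translation state has to be maintained; a sketch with made-up addresses (Linux syntax):

        ip route add 192.168.10.0/24 via 192.168.0.2    # on the core router: the department subnet lives behind 192.168.0.2
        ip route add default via 192.168.0.1            # on the department router: everything else goes back to the core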


  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control, all handled by JavaScript code. Take a look at the following example: http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion This sets cookies via JavaScript with the right campaign origin. These query parameters can take many, sometimes random, values, and since they become part of the cache hash key, cache performance is heavily degraded in some scenarios. I assume cache servers have a fairly simple configuration option to ignore all query parameters, or only specific ones. Am I right? Does anyone know how hard this is to set up in popular web cache solutions? I'm not interested in one specific web cache solution; it would be great to hear about the one you use.
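    As one data point, in nginx the proxy cache key is just a string you compose yourself, so ignoring every query parameter is a one-line change; a sketch (only safe when query strings never change the response):

        proxy_cache_key "$scheme$request_method$host$uri";    # $uri excludes the query string; $request_uri would include it

    Ignoring only specific parameters (the utm_* ones) generally requires rewriting the URL before it is hashed, which most caches can also do but with a bit more configuration.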


  • Why does the UDP client only work while Wireshark is capturing?

    - by herzl shemuelian
    I have two Windows 7 machines, A and B, connected directly to each other, and I'm trying to run a performance test using tcpreplay. Step 1) I check connectivity between the two with netcat: on A I run nc -lvup 5432, and when I run nc -u 1.2.3.4 5432 on B, I can send data from B to A. Step 2) When I run tcpreplay on B (tcpreplay -i %0 myudp.pcap), A doesn't receive any data - but as soon as I open Wireshark on A, my nc starts reading the data. Why? I checked the destination MAC and destination IP in the pcap file and they are correct. Do the source MAC and source IP matter for UDP, given how I open the UDP server?
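    Wireshark puts the NIC into promiscuous mode, so frames whose destination MAC doesn't match the card are suddenly delivered instead of being dropped; that is the classic reason a replay only "works" while a capture is running. If that is the cause here, rewriting the replayed frames to A's real MAC address should fix it; a sketch with a placeholder MAC:

        tcprewrite --infile=myudp.pcap --outfile=fixed.pcap --enet-dmac=AA:BB:CC:DD:EE:FF --fixcsum
        tcpreplay -i %0 fixed.pcap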


  • Are animated GIFs supported in Google Chrome?

    - by user30852
    I have recently been testing a website and found animated GIF images that show fine in IE and Firefox, but in Google Chrome they only show briefly and then disappear! This happens whether I view the image on the page or open the file directly. Are there any reported problems with displaying GIFs in Chrome, or is it just being fussy? There seem to have been some problems in older versions of Chrome, but it's hard to believe something this simple wouldn't have been fixed by now. The version of Google Chrome I am using is 4.1.249.1021. Not sure if this is relevant, but some info about the image: width 216 pixels, height 36 pixels, horizontal resolution 96 dpi, vertical resolution 96 dpi, bit depth 32, frame count 3. EDIT: This seems to be a problem with the latest beta version of Chrome, as it works fine in 4.0.249.


  • Setting up a WGR614v7 behind a Linux box

    - by commodore fancypants
    Here's the setup: I have an openSUSE box with 2 NICs; one goes to my home network router, and the other runs DHCP and is attached to a wireless router. I'm trying to get this setup working before I switch to the Linux box as my home network router. My DHCP server will offer the wireless router (a WGR614v7) an address, but anything that connects through the wireless router ends up with an APIPA address. I have turned off all firewalls on the wireless network as well as the wireless router's own DHCP server. The Linux box isn't offering addresses to anything past the wireless router. Is this a problem with the router or with my DHCP setup? For testing purposes, I have both NICs in the internal zone, and I've tried both wireless and wired connections to the WGR614v7, to no avail.
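    A quick way to split the blame (a sketch; replace eth1 with the wireless-side NIC) is to watch for DHCP traffic on the Linux box while a client tries to connect:

        tcpdump -ni eth1 port 67 or port 68

    If no DHCPDISCOVER ever arrives, the router isn't passing the broadcasts along - on consumer routers like the WGR614 that usually means the uplink should go into a LAN port rather than the WAN port, so it behaves as a plain access point. If the discover arrives but no offer goes back, the problem is on the dhcpd side.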


  • Why does my new GTX 660m's clock drop drastically after running for a few seconds?

    - by trVoldemort
    I bought a Lenovo Y580 laptop a few days ago; this model is equipped with a GTX 660m graphics card. However, gaming performance has been unbelievably poor out of the box, so I figured something was wrong with the graphics card. I downloaded GPU-Z and ran a simple test, and was shocked to find the GTX 660m running at a 135.0 MHz core clock (it should be at least 835 MHz!). Even the integrated Intel HD Graphics 4000 runs at 650 MHz. Further examination showed that for the first few seconds the GTX 660m actually runs at 835 MHz, but the core temperature quickly reaches 90+°C and the clock then (presumably) drops automatically to 135.0 MHz. This is very strange. Does anyone have any idea what's going on here?


  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable JAR so I can share it with friends. But this simply packs up the whole project, which can grow to several megabytes if I'm relying on a large library such as HttpClient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types until the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration?
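    Outside Eclipse, one rough way to get the same answer is to run the main class with class-loading tracing enabled and filter for classes loaded from your own output folder; a sketch (the classpath and main class are placeholders), with the caveat that it only lists classes actually loaded during that run, not everything statically referenced:

        java -verbose:class -cp bin com.example.MainOne 2>&1 | grep '\[Loaded' | grep 'from file:.*bin'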


  • How do UEFI and virtual machines relate to each other?

    - by Iterator
    I am trying to get my head around UEFI (Unified Extensible Firmware Interface) and it's not entirely clear to me how this affects virtual machines. Thus, there are three parts to this question: Is UEFI an advance in hardware support for virtualization? All other things being equal, would a machine with UEFI be more likely to run a virtual machine more efficiently than one without, or does UEFI cause any performance hits that negate any speed improvements from a virtual machine? Would the difference in execution be visible to code running in a virtual machine? (In theory, it shouldn't, but in practice?)


  • Outlook 2010 + Move IMAP PST file = Outlook data file cannot be accessed.

    - by GWB
    I set up a new IMAP account in Outlook 2010. It works, but it creates the IMAP PST file in C:\Users\User\AppData\Local\Microsoft\Outlook. I want the file on my data drive in D:\Users\User\Documents\Outlook Files (the same folder where Outlook automatically creates the local Outlook PST). I followed the instructions here to move the IMAP PST. Testing the account (send/receive) works fine, but if I try to manually send an email I get error 0x8004010F: "Outlook data file cannot be accessed." I've tried repairing the PST using SCANPST (it always finds errors) and deleting and recreating the account, but I get the same error. If I move the PST file back, it works again, but this is not ideal. Note: I don't think this is a duplicate of this question, as the cause is different and the solution does not help.
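    One workaround that avoids moving the file after the fact is the ForcePSTPath registry value, which makes Outlook create new data files (including the IMAP PST) in a folder you choose from the start; a sketch for Outlook 2010 (set the value, then delete and re-add the IMAP account so the PST is created fresh in the new location - the path below is the one from the question):

        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v ForcePSTPath /t REG_EXPAND_SZ /d "D:\Users\User\Documents\Outlook Files" /f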


  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include badly formatted data, blank rows in tables (indicating missing non-critical data), identifier patterns ("TEST" means one of my devs left a testing feed turned on), etc. I think there are applications out there that can be scripted to do things like: 1. log in, 2. go to $URL, 3. select the 3rd link in $LIST or $PATTERN, 4. check the HTML from that link for $PATTERNS, 5. email a report. Are these goals sane? What applications/tools can help with this?
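    Browser-automation tools such as Selenium, or scriptable HTTP checks in monitoring systems like Nagios, cover exactly this kind of flow. As a rough illustration of the pattern - every URL, credential, pattern and address below is a placeholder, and it assumes curl and a local mail command are available - even a small shell script gets surprisingly far for the HTML-pattern checks:

        #!/bin/sh
        BASE=https://intranet.example.com
        report=$(mktemp)
        # step 1: log in and keep the session cookie
        curl -s -c /tmp/app.cookies -d 'user=monitor&pass=secret' "$BASE/login" > /dev/null
        # steps 2-4: fetch a page and check it for known "soft error" patterns
        page=$(curl -s -b /tmp/app.cookies "$BASE/reports/daily")
        echo "$page" | grep -q 'TEST'       && echo 'WARN: testing feed identifier present' >> "$report"
        echo "$page" | grep -q '<td></td>'  && echo 'WARN: blank table cell (missing data)'  >> "$report"
        # step 5: mail the report only if something was found
        [ -s "$report" ] && mail -s 'web app soft errors' admin@example.com < "$report"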


  • wbadmin incremental system state backup

    - by user74513
    I am doing system state backups on a Windows Server 2008 R2 Enterprise (Service Pack 1) machine and expected the backups after the first one to be incremental. However, each backup creates a new directory of VHD files, and those VHD files are almost the same size as the ones from the first backup, so the backups do not appear to be incremental. I used the following command: wbadmin start systemstatebackup -backupTarget:f: I played around with the settings under "Configure Performance Settings" in the Windows Server Backup plug-in in Server Manager, but according to the description at the top of that dialog, those settings do not apply to system state backups. Are there any settings available to make wbadmin system state backups incremental?

