Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • Looking for the best EC2 setup for 3 sites totaling 1.5 mil in monthly traffic

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month and growing. The majority of the traffic (1 million) goes to one forum site in the group, and the rest goes to an ecommerce site and a small WordPress site. So here is my question: would it be better for us to combine the two large EC2 servers into one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS instance? Or should we set up 2-3 smaller EC2 servers, load-balanced, with a single RDS instance? Or something completely different? One concern is that if one site crashes it takes the others down with it. That happened in the past, but I am pretty sure it was because of the forum software and not the server setup. -john

    Read the article

  • Zabbix - Some of the monitored items don't refresh

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, buffer memory, etc., updates nicely, while some items, like CPU iowait time (avg1), host local time and MySQL number of threads, which were monitored in the past, have a last check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and one is not. Please help: how can I fix this? Update: I used zabbix_get from the Zabbix server to check one of the items on the Zabbix client and it works, so the problem must be on the Zabbix server side.
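
    For anyone debugging the same thing, this is roughly the zabbix_get check mentioned in the update; the host address is a placeholder, and the mysql.threads key assumes the usual UserParameter from a MySQL template:

      # Ask the agent directly, bypassing the server's scheduler
      # (default agent port 10050; replace the address with your client's)
      zabbix_get -s 192.0.2.10 -p 10050 -k "system.cpu.util[,iowait,avg1]"
      zabbix_get -s 192.0.2.10 -p 10050 -k "mysql.threads"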

    Read the article

  • MySQL queries stuck in "sending data" state

    - by MarkPW
    I'm running a LiteSpeed web server and a database server (2 x Clovertown 5335) with MySQL 5.1.52-log (running on CentOS 4.5 and 4.6 32-bit respectively). Last week I upgraded from 5.0.51a-community-log, and since then I've been having a problem whereby my database server's load starts increasing for no apparent reason. Running SHOW PROCESSLIST; I see all of my SQL_CACHE queries that use a wildcard in WHERE backing up and getting stuck in the "sending data" state. However, other queries that use SQL_CACHE but no wildcard do not get caught up in this. To get things going again the first time it happens (after about 24 hours), I have to restart MySQL. Four times out of five it will re-occur after about 20 minutes or so, and for those subsequent occurrences it is not necessary to restart MySQL; simply stopping the web server for a few minutes while I allow the stuck queries to clear themselves up will suffice. I had no such problem with the previous MySQL install. What could be causing the issue and how do I resolve it? Thanks
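
    Queries piling up in "sending data" on 5.1 are often query-cache contention; a hedged way to check, and to rule the cache out, from the shell (assuming the cache can be dropped temporarily):

      # Look for a large or heavily pruned query cache
      mysql -e "SHOW VARIABLES LIKE 'query_cache%'; SHOW GLOBAL STATUS LIKE 'Qcache%';"
      # Disable the cache at runtime as a test; no restart required
      mysql -e "SET GLOBAL query_cache_size = 0;"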

    Read the article

  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like your ideas on how I can achieve what I am aiming for. My goal is to create a database with information about the people who work at a place, depending on their profession and skills, and to keep track of salaries and projects (how much a project would cost, summing all the hours of work). I have 3 categories which can have subcategories: Outsourcing; Technician (welder, turner, assistant); Administrative (supervisor, manager). So each person has their own information and the projects they are working on, and one person may do several jobs. I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROJECTS, SALARY, PROFESSION), but I guess there is a better way of doing this:

      create table Employee (
        Person_ID  int primary key,
        Name       varchar(30),
        Sex        varchar(10),
        Address    varchar(100),
        Profession varchar(30),
        Skills_ID  int,
        Project_ID int,
        Salary_ID  int,
        Salary     float
      );

      create table Skills (
        Skills_ID   int primary key,
        Person_ID   int,
        Skills_Name varchar(30),
        Skills_Pay  float,
        Comments    varchar(50),
        foreign key (Person_ID) references Employee (Person_ID)
      );

      create table Projects (
        Project_ID    int primary key,
        Person_ID     int,
        Project_Name  varchar(30),
        Working_Hours float,
        Comments      varchar(50),
        foreign key (Person_ID) references Employee (Person_ID)
      );

      create table Salary (
        Salary_ID     int primary key,
        Person_ID     int,
        Project_Name  varchar(30),
        Working_Hours float,
        Comments      varchar(50),
        foreign key (Person_ID) references Employee (Person_ID)
      );

    So to get the total cost of a project I would just sum the working hours of each employee involved, plus some extra costs, in an aggregate query. Is there a way to do this more efficiently? What should I add to or remove from this small model? I guess I am missing something in the salary part; maybe I need another table for that?
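
    One conventional refinement, sketched under the assumption that a person can hold many skills and work on many projects: move the many-to-many links into junction tables and hang the hours off the assignment itself, so project cost falls out of a join. All table, column and database names below are illustrative, not from the original question:

      mysql exampledb <<'SQL'
      create table Person (
        Person_ID int primary key,
        Name      varchar(30)
      );
      create table Skill (
        Skill_ID   int primary key,
        Skill_Name varchar(30),
        Hourly_Pay float
      );
      create table Project (
        Project_ID   int primary key,
        Project_Name varchar(30)
      );
      -- one row per person/project/skill combination actually worked
      create table Assignment (
        Person_ID     int,
        Project_ID    int,
        Skill_ID      int,
        Working_Hours float,
        primary key (Person_ID, Project_ID, Skill_ID),
        foreign key (Person_ID)  references Person  (Person_ID),
        foreign key (Project_ID) references Project (Project_ID),
        foreign key (Skill_ID)   references Skill   (Skill_ID)
      );
      -- project cost = sum over assignments of hours * rate for the skill used
      select p.Project_Name,
             sum(a.Working_Hours * s.Hourly_Pay) as Cost
      from   Assignment a
      join   Project p on p.Project_ID = a.Project_ID
      join   Skill   s on s.Skill_ID   = a.Skill_ID
      group by p.Project_Name;
      SQL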

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) (actually it's 60, but calls are load-balanced over two IIS servers). Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have request scope (not a lot). The IIS servers are virtual machines with 4 CPUs and 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring that percentage down (maybe using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20MB, but on average they are about 30KB (the 60-70% load is reached with assets around 30KB). The data that gets saved with WebORB is very small (2KB) and is just one object.

    Read the article

  • Linux laptop encryption

    - by kaerast
    What are my options for encrypting the /home directories of my Ubuntu laptops? They are currently set up without any encryption; some have /home as a separate partition whilst others don't. Most of these laptops are single-user standalone machines which are out on the road a lot. Are ecryptfs and the encrypted Private directory good enough, or are there better, more secure options? If somebody got hold of a laptop, how easy would it be for them to gain access to the encrypted files? Similar questions for encrypted LVM, TrueCrypt and any other solution I may not be aware of.
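
    For reference, encrypting a whole existing home with eCryptfs (rather than just the Private directory) is a one-command migration on Ubuntu; a sketch, with 'alice' as a placeholder user who must be logged out during the run:

      # Migrate an existing home directory onto eCryptfs (Ubuntu packaging)
      sudo apt-get install ecryptfs-utils
      sudo ecryptfs-migrate-home -u alice
      # Log in as alice before rebooting so the migration completes, then
      # delete the leftover /home/alice.* backup once everything checks out.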

    Read the article

  • How to convert an ext3 partition to use an encrypted file system without losing data?

    - by User1
    My embedded Linux device has 2 partitions: a small root partition containing the OS, and a big data partition which uses ext3. I want to encrypt the data partition using an encrypted file system, without losing any data on the partition. The root partition is too small to hold all the data from the data partition, and it is not possible to use any external data storage. Is there any tool that can convert the data partition's filesystem from ext3 to an encrypted fs without copying all the files somewhere else?
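
    If a recent cryptsetup can be built for the device, in-place conversion is possible in principle; a sketch of that route, which assumes the filesystem can first be shrunk to make room for the LUKS header (take whatever backup you can first — a power cut mid-run loses the partition):

      # Shrink the filesystem to free space for the LUKS header
      # (device name is a placeholder)
      e2fsck -f /dev/mmcblk0p2
      resize2fs /dev/mmcblk0p2 <current size minus a few MiB>
      # Encrypt in place, shifting the data down to make header room
      cryptsetup-reencrypt --new --reduce-device-size 8M /dev/mmcblk0p2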

    Read the article

  • Lots of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT in HAProxy

    - by Tux
    We are running HAProxy in production for around 10k+ concurrent users, but we are seeing a lot of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT sockets in the netstat output. This output is from an 8GB ubuntu-12.04 node (counts come from tallying netstat's State column, which also picks up the "established)" and "Foreign" header words, dropped here):

       8046 CLOSE_WAIT
          1 CLOSING
      40869 ESTABLISHED
       1212 FIN_WAIT1
       7575 FIN_WAIT2
       2252 LAST_ACK
          7 LISTEN
        143 SYN_RECV
       4920 TIME_WAIT

    Can someone please tell me what tweaking I need to do? Please note that all of these connections are persistent connections. Current settings: tcp_fin_timeout = 30, tcp_keepalive_time = 1800. Right now the application is working fine, but I'm wondering whether there will be any issues as we add more users to this HAProxy node.
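
    A hedged starting point for the kernel side; the values are common defaults to experiment with, not drop-in answers, and a CLOSE_WAIT pile-up usually means the application is holding sockets (worth reviewing the timeout client/server settings in the HAProxy config rather than sysctls):

      # Reclaim half-closed and TIME_WAIT sockets more aggressively
      sysctl -w net.ipv4.tcp_fin_timeout=15
      sysctl -w net.ipv4.tcp_tw_reuse=1
      # More room for outbound connections from the proxy
      sysctl -w net.ipv4.ip_local_port_range="1024 65535"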

    Read the article

  • Amazon EC2 pricing

    - by Pradyut Bhattacharya
    I'm really confused. I was trying to buy hosting on Amazon EC2. My site will not have much traffic, and I will be installing GlassFish and MySQL. Usage will be 1GB of RAM, less than 5GB of hard disk, and about the same in bandwidth. As mine is a startup, the number of hits per day would be fewer than 20, each visit lasting around 10 minutes. How should I calculate the price in the EC2 calculator? Thanks
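
    The calculator is essentially instance-hours times an hourly rate, plus storage and data transfer; a back-of-envelope sketch with deliberately made-up prices (check the current EC2 price list for real ones):

      # All rates below are placeholders, not Amazon's actual prices
      HOURLY=0.05    # hypothetical on-demand rate, USD per instance-hour
      EBS=0.10       # hypothetical EBS rate, USD per GB-month
      echo "instance: $(echo "730 * $HOURLY" | bc) USD/month"   # ~730 h/month
      echo "storage:  $(echo "5 * $EBS" | bc) USD/month"        # 5 GB volume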

    Read the article

  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3Gbit/s ports). This is the result I'm getting under FreeBSD 9.1, regardless of sync being set to 0 or 1:

      WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    On Linux, this is the result with sync=0:

      WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

      WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barrier, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

      [global]
      bs=1M
      ioengine=sync
      iodepth=4
      size=16g
      direct=1
      runtime=60
      filename=/dev/sdh
      sync=1

      [rand-write]
      rw=randwrite
      stonewall

    Read the article

  • SQL Server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem: when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow. When I pause the debugger, I notice that it's hanging on getting the response from the query. This only happens when connecting to a remote SQL Server (2008); if I connect to one on my local machine, it's fine. Also, if I restart the app it runs fast, even against the remote SQL Server, and subsequent runs are also fine. The only problem is when I connect to a remote SQL Server for the first time after rebooting my machine. What's more, I have noticed this same exact behavior with a 3rd-party app (also .NET) that also connects to a remote SQL Server. Another piece of info: this has only started happening since I upgraded my machine from XP to Win7 (64-bit). Also, other developers on my team who upgraded to Win7 are seeing the same behavior (both with the app we're developing and the 3rd-party .NET app). (copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )

    Read the article

  • Windows 8 will only recognize the Blu-ray drive if the install disk is present at boot time

    - by aceinthehole
    If I have the install disk in the Blu-ray drive at boot time and subsequently remove it and replace it with other Blu-ray media, everything functions as I'd expect. However, if no media is present, or another disk is in the drive at boot time, then Windows 8 does not seem to recognize that the Blu-ray drive is even present in the computer. It is not shown on the 'my computer' screen, Device Manager does not show it, and scanning for new hardware yields nothing. The driver seems to be installed and working as expected, so what is it about having the Windows 8 install disk in the drive or not that would cause this kind of behavior?

    Read the article

  • Windows 8 slow after refreshing and rebooting

    - by Dan Drews
    First, I apologize if this is in the wrong place; I use S/O a lot but not the other sites much. I have an HP Split X2 that has been very choppy as of late (it takes several seconds to respond to any form of input), so I went ahead and did a system refresh. After the refresh everything ran very fast, as it should; then, when I went to download my old apps, I needed to reboot. After rebooting, it went back to the choppiness. Does anybody have any thoughts on what this could be?

    Read the article

  • Does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host the web server and the MySQL database. As the website grows we'll move the database to a different server, and this machine will eventually only serve the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit, and that things like the drives I picked work in the hot-swap bays, etc. What do you guys think - am I good to go with this configuration? :)

      Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM ECC DIMM 240-pin, 2x LGA1366 sockets, 600 W power supply, 4 free hot-swap 3.5" bays)
      CPU: Intel BX80614E5620 Xeon E5620 - 4 cores, 2.40GHz, LGA 1366, 5.86GT/s QPI, 12MB cache, 64-bit, 80W, HyperThreading
      Memory: Crucial CT51272BB1339 4GB PC10600 DDR3 - 1333MHz, ECC, registered, 1x4096MB (possibly 3 or 4 of them)
      Hard drives: Western Digital WD2002FAEX Caviar Black - 2TB, 3.5", SATA 6Gbps, 7200 RPM, 64MB (possibly 2 or 3)

    Thank you very much for any professional advice :)

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with the row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row in these kinds of tables (with large arrays). Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
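
    For readers trying to picture the workload, a hedged reconstruction of the access pattern described; table and column names are invented for illustration:

      # Multiply each element of a per-row FLOAT8[] by a per-row scalar
      # and aggregate back into an array (names are hypothetical)
      psql -d mydb <<'SQL'
      SELECT pk, array_agg(v * weight) AS scaled
      FROM  (SELECT pk, weight, unnest(vals) AS v FROM vectors) s
      GROUP BY pk;
      SQL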

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK with the following spec: HPDL385_G2_PrevGen - HP single dual-core Opteron 2214 (2.2GHz), 4GB RAM, 2x 10,000rpm SCSI drives in RAID 1. Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can, and I wonder what the next step should be. I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article

  • Major computer speed problems

    - by Glen654
    I've been running Windows 7 on my laptop for about a year now and have had no issues regarding speed. About a month ago, my computer had what I now refer to as an "episode", where it ran extremely slowly; when I opened Task Manager I saw no significant processes running, nothing out of the ordinary, but my computer was at 100% CPU usage. Restarting used to fix this problem, but it seems to have gotten worse, to the point where restarting no longer fixes it, and it's interfering with my work. What should I do?

    Read the article

  • Microsoft Excel 2007 constantly calculating sheets

    - by acseven
    I believe this has been happening for two weeks now: Excel 2007 (on Windows XP) is acting funny on my computer; any medium-sized sheet with some formulas in it takes a significant amount of time to recalculate. I can see this because the "Calculating: 2 processors xx%" message, which was almost never seen before, now appears on most operations, like calculating a formula (in one cell), saving, previewing, etc. If the sheet is complex (lots of formulas), I have to disable automatic calculation because Excel becomes unusable - it hangs for a really long time, measurable in minutes. Any idea what may be causing this? PS: this is a Core 2 Duo computer with 2 GB of RAM.

    Read the article

  • Strategies for very fast delivery of webpages.

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are Nginx-cached and only 10-15% of hits go to the backend proxy, and the cache is bypassed only for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place:

      default via <ip> dev eth0 initcwnd 15
      net.ipv4.tcp_slow_start_after_idle = 0

    Despite everything being cached and a large initcwnd, my pages still take 2.5-3 seconds. My YSlow and PageSpeed scores are good (screenshots omitted). Are there strategies that can help deliver webpages even faster than this? Deliver pages in around 1 second for a 10KB payload? Notes: my servers run out of a fairly good data center, Linode at Fremont.
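
    One cheap way to see where those 2.5-3 seconds actually go is to break a fetch down by phase with curl; a sketch, with the URL as a placeholder:

      # Time DNS, connect, TTFB and total for a single page fetch
      curl -o /dev/null -s -w \
        "dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n" \
        http://example.com/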

    Read the article

  • USB key to pass password in CentOS 6

    - by Andrew
    I had a roommate who put a live CD in my desktop and looked around on my machine. I caught him in the act and threw him out. I haven't had a roommate for a while now, and to avoid the live CD issue again I encrypted the hard drive; the machine is running CentOS 6.3. Is there any way I can avoid typing the password in each time, if I have a USB key in the machine to feed the password to the system? An additional question: is there anything else you can suggest to solve the problem I have? Thanks
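
    The usual shape of this is a random keyfile on the stick enrolled as a second LUKS key; a sketch with placeholder device names (on CentOS 6 the tricky part is getting the initramfs to mount the stick at boot, which may need a dracut tweak):

      # Generate a keyfile on the USB stick and add it to the LUKS header
      dd if=/dev/urandom of=/mnt/usbkey/root.key bs=512 count=8
      chmod 0400 /mnt/usbkey/root.key
      cryptsetup luksAddKey /dev/sda2 /mnt/usbkey/root.key
      # /etc/crypttab then points the encrypted device at the keyfile;
      # if the stick is absent, cryptsetup falls back to the passphrase.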

    Read the article

  • Why the huge difference between etch and lenny MySQL

    - by rmarimon
    I've been working on a program for the last year. The development environment uses a MySQL database running on Debian etch, version "mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (i486) using readline 5.2". The production environment runs on Debian lenny with version "mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2". I was just timing some database access, and what takes 150 seconds in the development environment takes 300 in production. I checked the /etc/mysql/my.cnf files on both systems and the only differences are:

      # development
      bind-address = 10.168.1.82
      log_bin      = /var/log/mysql/mysql-bin.log

      # production
      bind-address   = 127.0.0.1
      myisam-recover = BACKUP
      #log_bin       = /var/log/mysql/mysql-bin.log

    I dump a database from production and load it into development, and with the same server everything takes half the time!!! What should I check?
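
    A hedged first step: the my.cnf diff only covers explicit overrides, so compare what the two servers actually run with; the host names below are placeholders:

      # Dump every runtime variable from each server and diff them
      mysql -h dev.example.com  -e "SHOW GLOBAL VARIABLES" > dev.vars
      mysql -h prod.example.com -e "SHOW GLOBAL VARIABLES" > prod.vars
      diff dev.vars prod.vars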

    Read the article

  • WinXP: File Record Segment nnnn is unreadable?

    - by chris
    I'm pretty sure the drive is toast, except for the fact that this error only shows up on one partition (the drive is out of a Dell computer and has a couple of Dell partitions, which boot and work fine). I've already purchased another hard drive and reinstalled WinXP, but when I reconnect this drive and reboot, I get thousands of these errors. Is there any chance of recovering any files off this? Should I prevent XP from trying to resolve the problems?

    Read the article

  • Cleaning up temp files in OSX

    - by deddebme
    I was a Windows person for more than 10 years. Around 4 months ago I switched to Mac, and I have never looked back. But there is one thing that bothers me: my Mac partition is slowly and steadily losing space. I am pretty sure there are a lot of orphaned temporary files lying around on the volume. I knew where to find the obsolete temp files on my Windows partition; where do they live in Mac OS X?
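
    For anyone in the same spot, a sketch of the usual first sweep: run the built-in maintenance scripts by hand, then measure the common temp and cache locations (paths are the standard OS X ones):

      # Run the periodic maintenance scripts manually
      sudo periodic daily weekly monthly
      # See what the usual temp/cache locations are holding
      sudo du -sh /private/var/tmp /private/var/folders ~/Library/Caches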

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus seems to be that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like how the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down, you have lost the only copy of the data sitting on that hardware (one thing is certain: all hardware fails... when? is the only question). I have used a XioTech SAN as well - well worth the money, BTW, but I think it is overkill for the size of office I am targeting, and the cost of getting the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working with. Thank you, Keith

    Read the article
