Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

  • UDP server sending only 0 bytes of data

    - by mawia
    Hi all. This is a simple UDP server. I am trying to transmit data to some clients, but unfortunately it is unable to: sendto() runs without error, but its return value says it has sent nothing, and the clients, obviously, receive zero bytes in turn.

    ```cpp
    void *UdpServerStreamToClients(void *fileToServe)
    {
        int sockfd, n = 0, k;
        struct sockaddr_in servaddr, cliaddr;
        socklen_t len;
        char dataToSend[1000];

        sockfd = socket(AF_INET, SOCK_DGRAM, 0);
        bzero(&servaddr, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port = htons(32000);
        bind(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr));

        FILE *fp;
        if ((fp = fopen((char *)fileToServe, "r")) == NULL) {
            printf("can not open file ");
            perror("fopen");
            exit(1);
        }

        int dataRead = 1;
        while (dataRead) {
            len = sizeof(cliaddr);
            if ((dataRead = fread(dataToSend, 1, 500, fp)) < 0) {
                perror("fread");
                exit(1);
            }
            //sleep(2);
            /* clients is a list<clientInfo> maintained elsewhere */
            for (list<clientInfo>::iterator it = clients.begin(); it != clients.end(); it++) {
                cliaddr.sin_family = AF_INET;
                inet_aton(inet_ntoa(it->addr.sin_addr), &cliaddr.sin_addr);
                cliaddr.sin_port = htons(it->udp_port);
                /* note: always sends the full 1000-byte buffer, regardless of
                   how many bytes fread() actually returned */
                n = sendto(sockfd, dataToSend, sizeof(dataToSend), 0,
                           (struct sockaddr *)&cliaddr, len);
                cout << "number of bytes send by udp: " << n << endl;
                printf("SEND this message %d : %s to %s :%d \n", n, dataToSend,
                       inet_ntoa(cliaddr.sin_addr), ntohs(cliaddr.sin_port));
            }
        }
    }
    ```

    I checked the value of sizeof(dataToSend) and it is as expected, i.e. one thousand, the size of the buffer. Do you see a possible flaw in it? All help in this regard will be appreciated. Thanks!
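
    Not the asker's code, but for comparison, here is a minimal sketch of the same send loop in TypeScript on Node (the file name, port and address are placeholders). One detail worth noting: it passes only the number of bytes actually read to send(), whereas the loop above always passes sizeof(dataToSend):

    ```typescript
    import { createSocket } from "node:dgram";
    import { readFileSync } from "node:fs";

    const sock = createSocket("udp4");
    const data = readFileSync("fileToServe.dat"); // whole file in memory
    const CHUNK = 500;

    let pending = 0;
    for (let off = 0; off < data.length; off += CHUNK) {
      const len = Math.min(CHUNK, data.length - off);
      pending++;
      // Send exactly len bytes of this chunk, not a fixed buffer size.
      sock.send(data, off, len, 32000, "127.0.0.1", (err) => {
        if (err) console.error("send failed:", err);
        if (--pending === 0) sock.close(); // close after the last send completes
      });
    }
    if (data.length === 0) sock.close();
    ```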

  • JavaScript: getting a string into KB format

    - by Teske
    I am new to JavaScript and I just want to convert a string's size into a format that a person like me can read. Here is an example of what I am trying to do:

    ```javascript
    function string2size(string) {
        // some awesome coding I have no clue how to write
        return awesomeAnswer;
    }
    ```

    The return value should be something like "56 bytes", "12 KB" or "1 MB", depending on how big the string is. So if the string is

    ```javascript
    string = "there was an old woman who lived in a shoe";
    ```

    then string2size(string) should return its size in that kind of format. I know UTF-8 has come up in discussions like this, and I wouldn't object to the function taking it into account. I have tried Google and Yahoo searches, but they talk about doing it in PHP, and I really need it for JavaScript. I thank everyone for their time. -Teske
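
    A minimal sketch of what such a function could look like, measuring the string by its UTF-8 byte length (TypeScript; TextEncoder is available in modern browsers and in Node). Note that the nursery-rhyme example above is only 42 bytes, well short of a kilobyte:

    ```typescript
    function string2size(s: string): string {
      // Byte length of the string when encoded as UTF-8.
      let size: number = new TextEncoder().encode(s).length;
      const units = ["bytes", "KB", "MB", "GB"];
      let i = 0;
      while (size >= 1024 && i < units.length - 1) {
        size /= 1024;
        i++;
      }
      return `${i === 0 ? size : size.toFixed(1)} ${units[i]}`;
    }

    console.log(string2size("there was an old woman who lived in a shoe")); // "42 bytes"
    ```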

  • How to optimize code with introspection + heavy alloc on iPhone

    - by mamcx
    I have a problem. I am trying to display a UITable that could have 2,000-20,000 records (typical numbers). I have a SQLite database, similar to the Apple Contacts application. I do all the tricks I know to get a smooth scroll, but I still have a problem.

    I load the data in 50-record blocks. Then, as the user scrolls, I request the next 50 until the list is finished. However, loading those 50 records causes a noticeable pause in loading and scrolling. Everything else works fine: I cache the data, use opaque cells, draw them in code, etc.

    I swapped in code that loads the same data into dictionaries and got a performance boost, but I wonder whether I could keep my object-oriented approach and improve the actual code. This is the code I think has the performance problem:

    ```objc
    - (NSArray *)loadAndFill:(NSString *)sql theClass:(Class)cls
    {
        [self openDb];
        NSMutableArray *list = [NSMutableArray array];
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        DbObject *ds;
        Class myClass = NSClassFromString([DbObject getTableName:cls]);
        FMResultSet *rs = [self load:sql];
        while ([rs next]) {
            ds = [[myClass alloc] init];
            NSDictionary *props = [ds properties];
            NSString *fieldType = nil;
            id fieldValue;
            for (NSString *fieldName in [props allKeys]) {
                fieldType = [props objectForKey:fieldName];
                fieldValue = [self ValueForField:rs Name:fieldName Type:fieldType];
                [ds setValue:fieldValue forKey:fieldName];
            }
            [list addObject:ds];
            [ds release];
        }
        [rs close];
        [pool drain];
        return list;
    }
    ```

    And I think the main culprit is:

    ```objc
    - (id)ValueForField:(FMResultSet *)rs Name:(NSString *)fieldName Type:(NSString *)fieldType
    {
        id fieldValue = nil;
        if ([fieldType isEqualToString:@"i"] ||  // int
            [fieldType isEqualToString:@"I"] ||  // unsigned int
            [fieldType isEqualToString:@"s"] ||  // short
            [fieldType isEqualToString:@"S"] ||  // unsigned short
            [fieldType isEqualToString:@"f"] ||  // float
            [fieldType isEqualToString:@"d"])    // double
        {
            fieldValue = [NSNumber numberWithInt:[rs longForColumn:fieldName]];
        }
        else if ([fieldType isEqualToString:@"B"]) // bool or _Bool
        {
            fieldValue = [NSNumber numberWithBool:[rs boolForColumn:fieldName]];
        }
        else if ([fieldType isEqualToString:@"l"] ||  // long
                 [fieldType isEqualToString:@"L"] ||  // unsigned long
                 [fieldType isEqualToString:@"q"] ||  // long long
                 [fieldType isEqualToString:@"Q"])    // unsigned long long
        {
            fieldValue = [NSNumber numberWithLong:[rs longForColumn:fieldName]];
        }
        else if ([fieldType isEqualToString:@"c"] ||  // char
                 [fieldType isEqualToString:@"C"])    // unsigned char
        {
            fieldValue = [rs stringForColumn:fieldName];
            // Is it really a boolean?
            if ([fieldValue isEqualToString:@"0"] || [fieldValue isEqualToString:@"1"]) {
                fieldValue = [NSNumber numberWithInt:[fieldValue intValue]];
            }
        }
        else if ([fieldType hasPrefix:@"@"]) // Object
        {
            NSString *className = [fieldType substringWithRange:NSMakeRange(2, [fieldType length] - 3)];
            if ([className isEqualToString:@"NSString"]) {
                fieldValue = [rs stringForColumn:fieldName];
            }
            else if ([className isEqualToString:@"NSDate"]) {
                NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
                [dateFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss"];
                NSString *theDate = [rs stringForColumn:fieldName];
                if (theDate) {
                    fieldValue = [dateFormatter dateFromString:theDate];
                } else {
                    fieldValue = nil;
                }
                [dateFormatter release];
            }
            else if ([className isEqualToString:@"NSInteger"]) {
                fieldValue = [NSNumber numberWithInt:[rs intForColumn:fieldName]];
            }
            else if ([className isEqualToString:@"NSDecimalNumber"]) {
                fieldValue = [rs stringForColumn:fieldName];
                if (fieldValue) {
                    fieldValue = [NSDecimalNumber decimalNumberWithString:[rs stringForColumn:fieldName]];
                }
            }
            else if ([className isEqualToString:@"NSNumber"]) {
                fieldValue = [NSNumber numberWithDouble:[rs doubleForColumn:fieldName]];
            }
            else {
                // Is it a one-to-one relationship?
                if (![fieldType hasPrefix:@"NS"]) {
                    id rel = class_createInstance(NSClassFromString(className), sizeof(unsigned));
                    Class theClass = [rel class];
                    if ([rel isKindOfClass:[DbObject class]]) {
                        fieldValue = [rel init];
                        // Load the record...
                        NSInteger Id = [rs intForColumn:[theClass relationName]];
                        if (Id > 0) {
                            [fieldValue release];
                            Db *db = [Db currentDb];
                            fieldValue = [db loadById:theClass theId:Id];
                        }
                    }
                } else {
                    NSString *error = [NSString stringWithFormat:@"Err Can't get value for field %@ of type %@", fieldName, fieldType];
                    NSLog(error);
                    NSException *e = [NSException exceptionWithName:@"DBError" reason:error userInfo:nil];
                    @throw e;
                }
            }
        }
        return fieldValue;
    }
    ```

  • FormatException with IsolatedStorageSettings

    - by Jurgen Camilleri
    I have a problem when serializing a Dictionary<string, Person> to IsolatedStorageSettings. I'm doing the following:

    ```csharp
    public Dictionary<string, Person> Names = new Dictionary<string, Person>();

    if (!IsolatedStorageSettings.ApplicationSettings.Contains("Names"))
    {
        // Add to dictionary
        Names.Add("key", new Person(false,
            new System.Device.Location.GeoCoordinate(0, 0),
            new List<GeoCoordinate>() {
                new GeoCoordinate(35.8974, 14.5099),
                new GeoCoordinate(35.8974, 14.5099),
                new GeoCoordinate(35.8973, 14.5100),
                new GeoCoordinate(35.8973, 14.5099)
            }));

        // Serialize dictionary to IsolatedStorage
        IsolatedStorageSettings.ApplicationSettings.Add("Names", Names);
        IsolatedStorageSettings.ApplicationSettings.Save();
    }
    ```

    Here is my Person class:

    ```csharp
    [DataContract]
    public class Person
    {
        [DataMember]
        public bool Unlocked { get; set; }

        [DataMember]
        public GeoCoordinate Location { get; set; }

        [DataMember]
        public List<GeoCoordinate> Bounds { get; set; }

        public Person(bool unlocked, GeoCoordinate location, List<GeoCoordinate> bounds)
        {
            this.Unlocked = unlocked;
            this.Location = location;
            this.Bounds = bounds;
        }
    }
    ```

    The code works the first time; however, on the second run I get a System.FormatException at the if condition. Any help would be highly appreciated, thanks.

    P.S.: I tried IsolatedStorageSettings.ApplicationSettings.Clear(), but the call to Clear also throws a FormatException.

    I have found something new: the exception occurs twenty-five times, or at least that's how many times it shows up in the Output window. After that, however, the data is deserialized perfectly. Should I be worried about the exceptions if they do not stop the execution of the program?

    EDIT: Here's the call stack when the exception occurs:

    ```text
    mscorlib.dll!double.Parse(string s, System.Globalization.NumberStyles style, System.IFormatProvider provider) + 0x17 bytes
    System.Xml.dll!System.Xml.XmlConvert.ToDouble(string s) + 0x4b bytes
    System.Xml.dll!System.Xml.XmlReader.ReadContentAsDouble() + 0x1f bytes
    System.Runtime.Serialization.dll!System.Xml.XmlDictionaryReader.XmlWrappedReader.ReadContentAsDouble() + 0xb bytes
    System.Runtime.Serialization.dll!System.Xml.XmlDictionaryReader.ReadElementContentAsDouble() + 0x35 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlReaderDelegator.ReadElementContentAsDouble() + 0x19 bytes
    mscorlib.dll!System.Reflection.RuntimeMethodInfo.InternalInvoke(System.Reflection.RuntimeMethodInfo rtmi, object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, object parameters, System.Globalization.CultureInfo culture, bool isBinderDefault, System.Reflection.Assembly caller, bool verifyAccess, ref System.Threading.StackCrawlMark stackMark)
    mscorlib.dll!System.Reflection.RuntimeMethodInfo.InternalInvoke(object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, object[] parameters, System.Globalization.CultureInfo culture, ref System.Threading.StackCrawlMark stackMark) + 0x168 bytes
    mscorlib.dll!System.Reflection.MethodBase.Invoke(object obj, object[] parameters) + 0xa bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadValue(System.Type type, string name, string ns, System.Runtime.Serialization.XmlObjectSerializerReadContext context, System.Runtime.Serialization.XmlReaderDelegator xmlReader) + 0x138 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadMemberAtMemberIndex(System.Runtime.Serialization.ClassDataContract classContract, ref object objectLocal, System.Runtime.Serialization.DeserializedObject desObj) + 0xc4 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadClass(System.Runtime.Serialization.DeserializedObject desObj, System.Runtime.Serialization.ClassDataContract classContract, int membersRead) + 0xf3 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.Deserialize(System.Runtime.Serialization.XmlObjectSerializerReadContext context) + 0x36 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.InitializeCallStack(System.Runtime.Serialization.DataContract clContract, System.Runtime.Serialization.XmlReaderDelegator xmlReaderDelegator, System.Runtime.Serialization.XmlObjectSerializerReadContext xmlObjContext, System.Xml.XmlDictionaryString[] memberNamesColl, System.Xml.XmlDictionaryString[] memberNamespacesColl) + 0x77 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.CollectionDataContract.ReadXmlValue(System.Runtime.Serialization.XmlReaderDelegator xmlReader, System.Runtime.Serialization.XmlObjectSerializerReadContext context) + 0x5d bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(System.Runtime.Serialization.DataContract dataContract, System.Runtime.Serialization.XmlReaderDelegator reader) + 0x3 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(System.Runtime.Serialization.XmlReaderDelegator reader, string name, string ns, ref System.Runtime.Serialization.DataContract dataContract) + 0x10e bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(System.Runtime.Serialization.XmlReaderDelegator xmlReader, System.Type declaredType, System.Runtime.Serialization.DataContract dataContract, string name, string ns) + 0xb bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.DataContractSerializer.InternalReadObject(System.Runtime.Serialization.XmlReaderDelegator xmlReader, bool verifyObjectName) + 0x124 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObjectHandleExceptions(System.Runtime.Serialization.XmlReaderDelegator reader, bool verifyObjectName) + 0xe bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObject(System.Xml.XmlDictionaryReader reader) + 0x7 bytes
    System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObject(System.IO.Stream stream) + 0x17 bytes
    System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.Reload() + 0xa3 bytes
    System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.IsolatedStorageSettings(bool useSiteSettings) + 0x20 bytes
    System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.ApplicationSettings.get() + 0xd bytes
    ```

  • “Disk /dev/xvda1 doesn't contain a valid partition table”

    - by Simpanoz
    I am a newbie to EC2 and Ubuntu 11 (EC2 free-tier Ubuntu). I ran the following commands:

    ```shell
    sudo mkfs -t ext4 /dev/xvdf6
    sudo mkdir /db
    sudo vim /etc/fstab    # added: /dev/xvdf6 /db ext4 noatime,noexec,nodiratime 0 0
    sudo mount /dev/xvdf6 /db
    fdisk -l
    ```

    I got the following output. Can someone tell me what I am doing wrong and how it can be rectified?

    ```text
    Disk /dev/xvda1: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/xvda1 doesn't contain a valid partition table

    Disk /dev/xvdf6: 6442 MB, 6442450944 bytes
    255 heads, 63 sectors/track, 783 cylinders, total 12582912 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/xvdf6 doesn't contain a valid partition table
    ```

  • Troubleshooting the source of heavy resource usage on a Server 2008 box running multiple sites

    - by batman_man
    Hi, I am running about 10 ASP.NET websites on a hosted virtual server. The server runs Server 2008, and each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of investigation I could think of was looking in Task Manager, where I can see w3wp and sqlserver.exe jumping to 40% CPU usage every 5-10 seconds. What steps can I take to determine which of my websites is taking these resources and/or which database is getting hit the most? I do, of course, have SSMS installed on the machine as well. As you can tell, my sysadmin skills are very, very limited; any help would be much appreciated.

  • BitmapData heavy usage - memory disaster (Spark/FB4)

    - by keyle
    I've got a Flex component that works pretty well but unfortunately turns into a disaster once used in a DataGroup item renderer of about 40-50 items. Essentially it uses BitmapData to take a screenshot of a fully rendered web page in mx:HTML (this version of WebKit rocks, by the way; miles better than Flex 3). The code is pretty self-explanatory, I think: http://noben.org/show/PageGrabber.mxml I've optimized it all I could, browsed and searched for answers, and have already trimmed it down a lot, but I'm desperate to reduce the memory usage (about 600 MB after 100 draws). The garbage collector has little effect. Thanks! Nic

  • Compress file to bytes for uploading to SQL Server

    - by Chris
    I am trying to zip files into a SQL Server database table. I can't ensure that the user of the tool has write privileges on the source file folder, so I want to load the file into memory, compress it to an array of bytes and insert it into my database. The code below does not work:

    ```csharp
    class ZipFileToSql
    {
        public event MessageHandler Message;

        protected virtual void OnMessage(string msg)
        {
            if (Message != null)
            {
                MessageHandlerEventArgs args = new MessageHandlerEventArgs();
                args.Message = msg;
                Message(this, args);
            }
        }

        private int sourceFileId;
        private SqlConnection Conn;
        private string PathToFile;
        private bool isExecuting;

        public bool IsExecuting { get { return isExecuting; } }
        public int SourceFileId { get { return sourceFileId; } }

        public ZipFileToSql(string pathToFile, SqlConnection conn)
        {
            isExecuting = false;
            PathToFile = pathToFile;
            Conn = conn;
        }

        public void Execute()
        {
            isExecuting = true;
            byte[] data;
            byte[] cmpData;

            // create temp zip file
            OnMessage("Reading file to memory");
            FileStream fs = File.OpenRead(PathToFile);
            data = new byte[fs.Length];
            ReadWholeArray(fs, data);

            OnMessage("Zipping file to memory");
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(data, 0, data.Length);
            // (the GZipStream is still open at this point, and ms is positioned at its end)
            cmpData = new byte[ms.Length];
            ReadWholeArray(ms, cmpData);

            OnMessage("Saving file to database");
            using (SqlCommand cmd = Conn.CreateCommand())
            {
                cmd.CommandText = @"MergeFileUploads";
                cmd.CommandType = CommandType.StoredProcedure;
                //cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = data;
                cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = cmpData;
                SqlParameter p = new SqlParameter();
                p.ParameterName = "@SourceFileId";
                p.Direction = ParameterDirection.Output;
                p.SqlDbType = SqlDbType.Int;
                cmd.Parameters.Add(p);
                cmd.ExecuteNonQuery();
                sourceFileId = (int)p.Value;
            }
            OnMessage("File Saved");
            isExecuting = false;
        }

        private void ReadWholeArray(Stream stream, byte[] data)
        {
            int offset = 0;
            int remaining = data.Length;
            float Step = data.Length / 100;
            float NextStep = data.Length - Step;
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, remaining);
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
                if (remaining < NextStep)
                {
                    NextStep -= Step;
                }
            }
        }
    }
    ```
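
    As an aside, the read-then-compress-in-memory flow that Execute() attempts can be sketched very compactly; here it is in TypeScript on Node for illustration (the file name is a placeholder). The relevant property is that zlib.gzipSync returns a complete, fully flushed buffer, with no stream left to close or rewind before the bytes are handed to the database:

    ```typescript
    import { readFileSync } from "node:fs";
    import { gzipSync } from "node:zlib";

    // Read the whole file and compress it in one step; the result is a
    // finished gzip buffer, ready to bind to a VARBINARY parameter.
    const data = readFileSync("input.dat");
    const compressed = gzipSync(data);
    console.log(`compressed ${data.length} bytes to ${compressed.length} bytes`);
    ```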

  • Database design for heavy timed data logging

    - by user293995
    Hi, I have an application where I receive data 40,000 rows at a time, and I have 5 million rows to handle in total (a 500 MB MySQL 5.0 database). Currently those rows are all stored in the same table, which makes it slow to update, hard to back up, and so on. What kind of schema is used in this sort of application to allow long-term access to the data without the problems of overly big tables, with easy backups and fast reads/writes? Is PostgreSQL better than MySQL for this purpose? Thanks in advance. Best regards

  • Saving "heavy" figure to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100,000+) and want to save it into a PDF file. With the zbuffer or painters renderer I get a very large file that opens slowly (over 4 MB); all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels; that file size is about 150 KB. Try this simplified code, for example:

    ```matlab
    x = linspace(1,10,100000);
    y = sin(x) + randn(size(x));
    plot(x,y,'.')
    set(gcf,'Renderer','zbuffer')
    print -dpdf -r300 testpdf_zb
    set(gcf,'Renderer','painters')
    print -dpdf -r300 testpdf_pa
    set(gcf,'Renderer','opengl')
    print -dpdf -r300 testpdf_op
    ```

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X; it looks like OpenGL is not supported there, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.

  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, where "work" means not only database reads but writes as well. The application is not very typical: it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application was not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. Right now we are using two HP DL380 servers, each with two 4-core processors, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:

    - how many servers are needed and how powerful (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware that is needed, like particular disk storage solutions

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

  • Helper functions & prototype methods to replace heavy frameworks?

    - by Rob
    All frameworks aside, what are some of the common helper functions/prototype methods you use on a daily basis? Please note I am not arguing against frameworks. I've simply found that the majority of what I do on a daily basis can, most often, be done with a few dozen Array, String and Element.prototype methods. With the addition of a few helper functions like $ (getElementById) and $$$ (get elements by class), I am able to get some of the core benefits, however basic, of a much heavier framework. If you were to collect a small library of basic methods and functions to replace the core functionality of some of the popular frameworks, what would they be?
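
    For reference, the two helpers mentioned above can be as small as this (a TypeScript sketch; both underlying calls are standard DOM APIs):

    ```typescript
    // $: shorthand for document.getElementById
    function $(id: string): HTMLElement | null {
      return document.getElementById(id);
    }

    // $$$: shorthand for getElementsByClassName, returned as a plain array
    function $$$(className: string): Element[] {
      return Array.from(document.getElementsByClassName(className));
    }
    ```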

  • CentOS - Add additional hard drive RAID arrays on a Dell PERC 5/i card

    - by Quanano
    We have a Dell PowerEdge 2900 system with a Dell PERC 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one RAID array on this controller, alongside a different controller, and it is working fine. We are now trying to access the other drives, but they are not being shown in /dev as sdb, etc. sda is the drive we installed CentOS on, and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found: sg0 is the CD-ROM and sg2 is the CentOS install; I think sg1 is the other drive, but I cannot see any way to mount its partitions, as only the drive is listed in /dev. Thanks.

    EXTRA INFO. fdisk -l:

    ```text
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x11e3119f

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              64        8845    70528000   8e  Linux LVM

    Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
    255 heads, 63 sectors/track, 4186 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

    Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
    255 heads, 63 sectors/track, 2570 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

    Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
    255 heads, 63 sectors/track, 2023 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table
    ```

    These are all from the install HDD, not the additional hard drives.

    modprobe a320raid:

    ```text
    FATAL: Module a320raid not found.
    ```

    lsscsi -v:

    ```text
    [0:0:0:0]  cd/dvd   TSSTcorp CDRWDVD TS-H492C  DE02  /dev/sr0
      dir: /sys/bus/scsi/devices/0:0:0:0  [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
    [4:0:10:0] enclosu  DP BACKPLANE  1.05  -
      dir: /sys/bus/scsi/devices/4:0:10:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
    [4:2:0:0]  disk     DELL PERC 5/i  1.03  /dev/sda
      dir: /sys/bus/scsi/devices/4:2:0:0  [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]
    ```

    lsmod:

    ```text
    Module                  Size  Used by
    fuse                   66285  0
    des_generic            16604  0
    ecb                     2209  0
    md4                     3461  0
    nls_utf8                1455  0
    cifs                  278370  0
    autofs4                26888  4
    ipt_REJECT              2383  0
    ip6t_REJECT             4628  2
    nf_conntrack_ipv6       8748  2
    nf_defrag_ipv6         12182  1 nf_conntrack_ipv6
    xt_state                1492  2
    nf_conntrack           79453  2 nf_conntrack_ipv6,xt_state
    ip6table_filter         2889  1
    ip6_tables             19458  1 ip6table_filter
    ipv6                  322029  31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
    bnx2                   79618  0
    ses                     6859  0
    enclosure               8395  1 ses
    dcdbas                  9219  0
    serio_raw               4818  0
    sg                     30124  0
    iTCO_wdt               13662  0
    iTCO_vendor_support     3088  1 iTCO_wdt
    i5000_edac              8867  0
    edac_core              46773  3 i5000_edac
    i5k_amb                 5105  0
    shpchp                 33482  0
    ext4                  364410  3
    mbcache                 8144  1 ext4
    jbd2                   88738  1 ext4
    sd_mod                 39488  3
    crc_t10dif              1541  1 sd_mod
    sr_mod                 16228  0
    cdrom                  39771  1 sr_mod
    megaraid_sas           77090  2
    aic79xx               129492  0
    scsi_transport_spi     26151  1 aic79xx
    pata_acpi               3701  0
    ata_generic             3837  0
    ata_piix               22846  0
    radeon               1023359  1
    ttm                    70328  1 radeon
    drm_kms_helper         33236  1 radeon
    drm                   230675  3 radeon,ttm,drm_kms_helper
    i2c_algo_bit            5762  1 radeon
    i2c_core               31276  4 radeon,drm_kms_helper,drm,i2c_algo_bit
    dm_mirror              14101  0
    dm_region_hash         12170  1 dm_mirror
    dm_log                 10122  2 dm_mirror,dm_region_hash
    dm_mod                 81500  11 dm_mirror,dm_log
    ```

    blkid:

    ```text
    /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4"
    /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member"
    /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4"
    /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap"
    /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4"
    ```

    dmesg shows:

    ```text
    megaraid_sas: fw state:c0000000
    megasas: fwstate:c0000000, dis_OCR=0
    scsi2 : LSI SAS based MegaRAID driver
    scsi 2:0:0:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
    scsi 2:0:1:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
    scsi 2:0:2:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
    scsi 2:0:3:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
    scsi 2:0:4:0: Direct-Access  HITACHI  HUS154545VLS300  D590  PQ: 0 ANSI: 5
    scsi 2:0:5:0: Direct-Access  HITACHI  HUS154545VLS300  D590  PQ: 0 ANSI: 5
    scsi 2:0:8:0: Direct-Access  FUJITSU  MBA3073RC        D305  PQ: 0 ANSI: 5
    scsi 2:0:9:0: Direct-Access  FUJITSU  MBA3073RC        D305  PQ: 0 ANSI: 5
    ```

    i.e. the 3 RAID arrays: the Seagate, Hitachi and Fujitsu hard drives respectively.

    FURTHER UPDATE: I have installed the MegaRAID Storage Manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives form two RAID arrays, one of 4 disks and one of 2 disks, and those drives are listed as (Foreign) Unconfigured Good.

  • Testing a GUI-heavy WPF application

    - by Hamish Grubijan
    We (my colleagues and I) have a messy, mature, 12-year-old app that is GUI-based, and the current plan is to add new dialogs and other GUI in WPF, as well as to replace some of the older dialogs with WPF versions. At the same time we wish to be able to test that monster with GUI automation, in a maintainable way. Some challenges:

    - The application is massive.
    - It constantly gains new features.
    - It is being changed around (bug fixes, patches).
    - It has a back end, and a layer in between.
    - Its state can get out of whack if you beat it to death.

    What we want is:

    - Some tool that can automate testing of WPF.
    - Auto-discovery of what the inputs and the outputs of a dialog are. An old test should still work if you add a label that does nothing; it should fail, however, if you remove a necessary text field.
    - It would be very nice if the test suite were easy to maintain, so that it runs without breaking most of the time.
    - Every new dialog should be created with testability in mind.

    At this point I do not know exactly what I want, so I am marking this as a community wiki. If having to test a huge GUI-based app rings a bell (even if not in WPF), then please share your good, bad and ugly experiences here.

  • Optimal sharing of a heavy computation job using snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with, and one big job (resampling a large dataset) to share between them. I've set up multicore and a (socket) snow cluster on each. As a high-level interface I'm using foreach.

    What is the optimal sharing of the job? Should I set up a snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each (i.e. split the job into two chunks, run one on each server, and then stitch the results back together)?

    Basically, what is an easy way to:

    1. keep communication between the servers down (since this is probably the slowest bit), and
    2. ensure that the random numbers generated on the servers are not highly correlated?

  • Some web pages (especially Apple documentation) cause heavy CPU usage in Windows IE8

    - by Mark Lutton
    Maybe this belongs on Server Fault instead, but some of you may have noticed this issue (particularly those developing on Mac and using a Windows machine to read the reference material). I posted the same question on a Microsoft forum and got one answer from someone who reproduced the problem, so it's not just my machine. No solution yet.

    Ever since this month's security updates, I find that many web pages cause the CPU to run at maximum for as long as the page is visible. This happens in both IE7 and IE8 on at least three different computers (two with Windows XP, one with Vista). Here is one of the pages, running on XP with IE8: http://learning2code.blogspot.com/2007/11/update-using-subversion-with-xcode-3.html And here is one that does it on Vista with IE8: http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html

    You can leave the page open for hours and the CPU is still at high usage. This doesn't happen every time, and it is not always reproducible; sometimes it is OK the second or third time the page loads. In IE7 the high usage is in ieframe.dll, version 7.0.6000.16890. In IE8 the high usage is in iertutil.dll, version 8.0.6001.18806.

  • Saving "heavy" image to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100,000+) and want to save it into a PDF file. With the zbuffer or painters renderer I get a very large file that opens slowly (over 4 MB); all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels; that file size is about 150 KB. Try this simplified code, for example:

    ```matlab
    x = linspace(1,10,100000);
    y = sin(x) + randn(size(x));
    plot(x,y,'.')
    set(gcf,'Renderer','zbuffer')
    print -dpdf -r300 testpdf_zb
    set(gcf,'Renderer','painters')
    print -dpdf -r300 testpdf_pa
    set(gcf,'Renderer','opengl')
    print -dpdf -r300 testpdf_op
    ```

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X; it looks like OpenGL is not supported there, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.

  • Cannot install GRUB to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS, and my /dev/sda HDD was replaced several days ago. I used these commands to do the replacement:

    ```shell
    # become superuser
    sudo bash

    # check RAID state; should be "clean, degraded"
    mdadm -Q -D /dev/md0

    # remove the broken disk from the RAID
    mdadm /dev/md0 --fail /dev/sda1
    mdadm /dev/md0 --remove /dev/sda1

    # check partitions
    fdisk -l

    # shut down the computer
    shutdown now

    # physically replace the old disk with the new one, then start the system again

    # check partitions
    fdisk -l

    # copy the partition layout from sdb to sda
    sfdisk -d /dev/sdb | sfdisk /dev/sda

    # set the partition type for sda1
    sfdisk --change-id /dev/sda 1 fd

    # add sda1 to the RAID
    mdadm /dev/md0 --add /dev/sda1

    # check RAID state; should be "clean, degraded, recovering"
    mdadm -Q -D /dev/md0

    # to watch the status
    cat /proc/mdstat
    ```

    This is my mdadm output after the sync:

    ```text
    /dev/md0:
            Version : 0.90
      Creation Time : Wed Feb 17 16:18:25 2010
         Raid Level : raid1
         Array Size : 470455360 (448.66 GiB 481.75 GB)
      Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Thu Nov  1 15:19:31 2012
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
             Events : 0.11049560

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    ```

    After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. This is my fdisk -l output:

    ```text
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00057d19

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
    /dev/sda2       940910985   976768064    17928540    5  Extended
    /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000667ca

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
    /dev/sdb2       940910985   976768064    17928540    5  Extended
    /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

    Disk /dev/md0: 481.7 GB, 481746288640 bytes
    2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/md0 doesn't contain a valid partition table
    ```

    This is my grub-install output:

    ```text
    root@answe:~# grub-install /dev/sda
    /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
    /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
    root@answe:~# grub-install /dev/sdb
    Installation finished. No error reported.
    ```

    So:

    1. "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0.
    2. "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0".

    I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in degraded mode... Can anybody resolve this issue? This is giving me a big headache.

  • Heavy MySQL operation & time constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP and MySQL. The application is for data migration: data has to be uploaded and, after various processing steps (cleaning out foreign characters, duplicate checks, ID generation), inserted into one central table and then into 5 different tables. There, an ID is generated, and that ID has to be written back to the central table. There are different sets of records and validation rules.

    The problem I am facing: when I insert, say, a 4K-row file (containing 20 columns), it works fine, and within 15 minutes everything gets inserted everywhere. But when I insert the same records again, it takes one hour (ideally it should finish quickly, simply marking the previously inserted data as duplicates). Going through the log file, I noticed there is a MySQL SELECT statement where I check for duplicates and fetch the IDs of the duplicates. Then, inside a for loop, I call a function that inserts the records into the 5 tables and updates the ID in the central table. This function call takes the major part of the whole processing time.

    P.S. The records have to be inserted one by one. Kindly suggest a solution.

    ```php
    // This is a sample of that code
    $query = mysql_query("SELECT DISTINCT p1.ID
                          FROM table1 p1, table2 p2, table3 a
                          WHERE p2.datatype = 0
                            AND (p1.datatype = 1 || p1.datatype = 2)
                            AND p2.ID = 0
                            AND p1.ID = a.ID
                            AND p1.coulmn1 = p2.column1
                            AND p1.coulmn2 = p2.coulmn2
                            AND a.coulmn3 = p2.column3");
    $num = mysql_num_rows($query);
    for ($i = 0; $i < $num; $i++) {
        $f = mysql_result($query, $i, "ID");
        // calling the insert/update function
        RecordInsert($f);
    }
    ```

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, gets moved to the swap file. This happens at night, when our service is not in use but others are scheduled to run (DB backups, AV scans, etc.). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds).

    I'm quite certain this is an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory; I know that doing so will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that would initiate higher-level functionality (there is already an internal scheduler that does very little and makes no difference). A third-party tool that provided that kind of service would be just as good.

    I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
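
    To illustrate the external-watchdog idea, the whole thing can be little more than a periodic request that exercises the service so its pages stay warm; a TypeScript/Node sketch, with the URL and interval invented for the example:

    ```typescript
    import { get } from "node:http";

    // Hit a cheap endpoint every few minutes so the service's working set
    // never goes completely cold overnight.
    setInterval(() => {
      get("http://localhost:8080/keepalive", (res) => res.resume())
        .on("error", (err) => console.error("keep-alive failed:", err.message));
    }, 5 * 60 * 1000);
    ```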

  • Improve heavy work in a loop with multithreading

    - by xjaphx
    I have a little problem with my data processing:

    ```csharp
    public void ParseDetails()
    {
        for (int i = 0; i < mListAppInfo.Count; ++i)
        {
            ParseOneDetail(i);
        }
    }
    ```

    For 300 records it usually takes around 13-15 minutes. I've tried to improve it using Parallel.For(), but it always stops at some point:

    ```csharp
    public void ParseDetails()
    {
        Parallel.For(0, mListAppInfo.Count, i => ParseOneDetail(i));
    }
    ```

    In the method ParseOneDetail(int index) I write an output log for tracking which record is being processed. It always hangs at some point, and I don't know why:

    ```text
    ParseOneDetail(): 89 ...
    ParseOneDetail(): 90 ...
    ParseOneDetail(): 243 ...
    ParseOneDetail(): 92 ...
    ParseOneDetail(): 244 ...
    ParseOneDetail(): 93 ...
    ParseOneDetail(): 245 ...
    ParseOneDetail(): 247 ...
    ParseOneDetail(): 94 ...
    ParseOneDetail(): 248 ...
    ParseOneDetail(): 95 ...
    ParseOneDetail(): 99 ...
    ParseOneDetail(): 249 ...
    ParseOneDetail(): 100 ...
    _ <hangs at this point>
    ```

    I appreciate your help and suggestions to improve this. Thank you!

    Edit 1: the method in question:

    ```csharp
    private void ParseOneDetail(int index)
    {
        Console.WriteLine("ParseOneDetail(): " + index + " ... ");
        ApplicationInfo appInfo = mListAppInfo[index];
        var htmlWeb = new HtmlWeb();
        var document = htmlWeb.Load(appInfo.AppAnnieURL);
        // get the first one only
        HtmlNode nodeStoreURL = document.DocumentNode.SelectSingleNode(Constants.XPATH_FIRST);
        appInfo.StoreURL = nodeStoreURL.Attributes[Constants.HREF].Value;
    }
    ```

    Edit 2: This is the error output after running for a while, as Enigmativity suggested:

    ```text
    ParseOneDetail(): 234 ...
    ParseOneDetail(): 87 ...
    ParseOneDetail(): 235 ...
    ParseOneDetail(): 236 ...
    ParseOneDetail(): 88 ...
    ParseOneDetail(): 238 ...
    ParseOneDetail(): 89 ...
    ParseOneDetail(): 90 ...
    ParseOneDetail(): 239 ...
    ParseOneDetail(): 92 ...

    Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.Net.WebException: The operation has timed out
       at System.Net.HttpWebRequest.GetResponse()
       at HtmlAgilityPack.HtmlWeb.Get(Uri uri, String method, String path, HtmlDocument doc, IWebProxy proxy, ICredentials creds) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1355
       at HtmlAgilityPack.HtmlWeb.LoadUrl(Uri uri, String method, WebProxy proxy, NetworkCredential creds) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1479
       at HtmlAgilityPack.HtmlWeb.Load(String url, String method) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1103
       at HtmlAgilityPack.HtmlWeb.Load(String url) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1061
       at SimpleChartParser.AppAnnieParser.ParseOneDetail(ApplicationInfo appInfo) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 90
       at SimpleChartParser.AppAnnieParser.<ParseDetails>b__0(ApplicationInfo ai) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 80
       at System.Threading.Tasks.Parallel.<>c__DisplayClass21`2.<ForEachWorker>b__17(Int32 i)
       at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c()
       at System.Threading.Tasks.Task.InnerInvoke()
       at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
       at System.Threading.Tasks.Task.<>c__DisplayClass7.<ExecuteSelfReplicating>b__6(Object )
       --- End of inner exception stack trace ---
       at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
       at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
       at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](TSource[] array, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](IEnumerable`1 source, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEach[TSource](IEnumerable`1 source, Action`1 body)
       at SimpleChartParser.AppAnnieParser.ParseDetails() in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 80
       at SimpleChartParser.Program.Main(String[] args) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\Program.cs:line 15
    ```

  • Heavy use of templates for mobile platforms

    - by Chris P. Bacon
    I've been flicking through the book Modern C++ Design by Andrei Alexandrescu, and it seems like interesting stuff. However, it makes very extensive use of templates, and I would like to find out whether this should be avoided when using C++ for mobile platform development (Brew MP, webOS, iOS, etc.) due to size considerations.

    In Symbian OS C++, the standard use of templates is discouraged. The OS itself uses them, but via an idiom known as thin templates, where the underlying implementation is done in a C style using void* pointers, with a thin template layered on top of it to achieve type safety. The reason they use this idiom, as opposed to regular templates, is specifically to avoid code bloat. So what are the opinions (or facts) on the use of templates when developing applications for mobile platforms?

  • Best practice for writing a connection string for a heavy-traffic ASP.NET web application

    - by hungrycoder
    What is the best way to define a connection string, in terms of connection pooling, for a web application that has at least 500 live users on the internet? And which load factors should be considered? Suppose I have a connection string as follows:

    ```text
    initial catalog=Northwind; Min Pool Size=20; Max Pool Size=500;
    data source=localhost; Connection Timeout=30; Integrated Security=SSPI
    ```

    Since Max Pool Size is 500, if the number of live users exceeds 500, say 520, will the remaining 20 users experience slower page loads? And what if I have the following connection string, which says nothing about pooling or connection timeout? How does the application behave then?

    ```text
    initial catalog=Northwind; data source=localhost; Integrated Security=SSPI
    ```

    I am using "using" statements, however, to access the MySQL database.

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application that has been suffering under high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance decreases drastically, up to total failure. The biggest load comes from authenticated users, whose activity causes inserts/updates/deletes on the database. I think MySQL is the bottleneck. Is it normal to slow down at this number of users?

    EDIT: I forgot to mention that I did do some profiling. I ran top and htop, and they showed me that all the memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart or stop Apache to reduce the load. The administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do a deep profiling analysis or code refactoring, so I'm considering two options:

    1. My tables are MyISAM. I heard MyISAM uses table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry?
    2. What if I take MySQL and move it to a dedicated machine with a lot of RAM?
