Search Results

Search found 7985 results on 320 pages for 'multi byte'.

Page 80/320 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • C# need help debugging socks5-connection attempt

    - by Chuck
    Hi, I've written the following code to (successfully) connect to a socks5 proxy. I send a user/pw auth and get an OK reply (0x00), but as soon as I tell the proxy to connect to whichever ip:port, it gives me 0x01 (general error).

      Socket socket5_proxy = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
      IPEndPoint proxyEndPoint = new IPEndPoint(IPAddress.Parse("111.111.111.111"), 1080); // proxy ip, port. fake for posting purposes.
      socket5_proxy.Connect(proxyEndPoint);
      byte[] init_socks_command = new byte[4];
      init_socks_command[0] = 0x05;
      init_socks_command[1] = 0x02;
      init_socks_command[2] = 0x00;
      init_socks_command[3] = 0x02;
      socket5_proxy.Send(init_socks_command);
      byte[] socket_response = new byte[2];
      int bytes_recieved = socket5_proxy.Receive(socket_response, 2, SocketFlags.None);
      if (socket_response[1] == 0x02)
      {
          byte[] temp_bytes;
          string socks5_user = "foo";
          string socks5_pass = "bar";
          byte[] auth_socks_command = new byte[3 + socks5_user.Length + socks5_pass.Length];
          auth_socks_command[0] = 0x05;
          auth_socks_command[1] = Convert.ToByte(socks5_user.Length);
          temp_bytes = Encoding.Default.GetBytes(socks5_user);
          temp_bytes.CopyTo(auth_socks_command, 2);
          auth_socks_command[2 + socks5_user.Length] = Convert.ToByte(socks5_pass.Length);
          temp_bytes = Encoding.Default.GetBytes(socks5_pass);
          temp_bytes.CopyTo(auth_socks_command, 3 + socks5_user.Length);
          socket5_proxy.Send(auth_socks_command);
          socket5_proxy.Receive(socket_response, 2, SocketFlags.None);
          if (socket_response[1] != 0x00)
              return;
          byte[] connect_socks_command = new byte[10];
          connect_socks_command[0] = 0x05;
          connect_socks_command[1] = 0x02; // streaming
          connect_socks_command[2] = 0x00;
          connect_socks_command[3] = 0x01; // ipv4
          temp_bytes = IPAddress.Parse("222.222.222.222").GetAddressBytes(); // target connection. fake ip, obviously
          temp_bytes.CopyTo(connect_socks_command, 4);
          byte[] portBytes = BitConverter.GetBytes(8888);
          connect_socks_command[8] = portBytes[0];
          connect_socks_command[9] = portBytes[1];
          socket5_proxy.Send(connect_socks_command);
          socket5_proxy.Receive(socket_response);
          if (socket_response[1] != 0x00)
              MessageBox.Show("Damn it"); // I always end here, 0x01
      }

    I've used this as a reference: http://en.wikipedia.org/wiki/SOCKS#SOCKS_5
    Have I completely misunderstood something here? The way I see it, I can connect to the socks5 fine. I can authenticate fine. But I/the proxy can't "do" anything? Yes, I know the proxy works. Yes, the target ip is available and yes the target port is open/responsive. I get 0x01 no matter what I try to connect to. Any help is VERY MUCH appreciated! Thanks, Chuck
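    For reference, RFC 1928 defines the SOCKS5 CONNECT request as VER 0x05, CMD 0x01 (0x02 is BIND, 0x03 is UDP ASSOCIATE), RSV 0x00, ATYP, the destination address, and finally a 2-byte port in network (big-endian) byte order. Below is a minimal, hypothetical sketch of assembling such a request, written in Java purely for illustration since the byte layout itself is language independent; it is not a claim about what this particular proxy is rejecting.

      import java.nio.ByteBuffer;

      public class Socks5Connect {
          // Build a SOCKS5 CONNECT request for an IPv4 destination (RFC 1928, section 4).
          static byte[] buildConnectRequest(byte[] ipv4, int port) {
              ByteBuffer buf = ByteBuffer.allocate(10);   // ByteBuffer is big-endian by default
              buf.put((byte) 0x05);                       // VER  = SOCKS5
              buf.put((byte) 0x01);                       // CMD  = CONNECT
              buf.put((byte) 0x00);                       // RSV
              buf.put((byte) 0x01);                       // ATYP = IPv4
              buf.put(ipv4);                              // DST.ADDR, 4 bytes
              buf.putShort((short) port);                 // DST.PORT in network byte order
              return buf.array();
          }

          public static void main(String[] args) {
              byte[] request = buildConnectRequest(new byte[] { (byte) 222, (byte) 222, (byte) 222, (byte) 222 }, 8888);
              System.out.println(request.length);         // 10
          }
      }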

    Read the article

  • Reading email address from contacts fails with weird memory issue - Solved

    - by CapsicumDreams
    Hi all, I'm stumped. I'm trying to get a list of all the email addresses a person has. I'm using the ABPeoplePickerNavigationController to select the person, which all seems fine. I'm setting my ABRecordRef personDealingWith; from the person argument to - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker shouldContinueAfterSelectingPerson:(ABRecordRef)person property:(ABPropertyID)property identifier:(ABMultiValueIdentifier)identifier { and everything seems fine up till this point. The first time the following code executes, all is well. When subsequently run, I can get issues. First, the code:

      // following line seems to make the difference (issue 1)
      // NSLog(@"%d", ABMultiValueGetCount(ABRecordCopyValue(personDealingWith, kABPersonEmailProperty)));
      // construct array of emails
      ABMultiValueRef multi = ABRecordCopyValue(personDealingWith, kABPersonEmailProperty);
      CFIndex emailCount = ABMultiValueGetCount(multi);
      if (emailCount > 0) {
          // collect all emails in array
          for (CFIndex i = 0; i < emailCount; i++) {
              CFStringRef emailRef = ABMultiValueCopyValueAtIndex(multi, i);
              [emailArray addObject:(NSString *)emailRef];
              CFRelease(emailRef);
          }
      }
      // following line also matters (issue 2)
      CFRelease(multi);

    If compiled as written, there are no errors or static analysis problems. This crashes with a *** -[Not A Type retain]: message sent to deallocated instance 0x4e9dc60 error. But wait, there's more! I can fix it in either of two ways. Firstly, I can uncomment the NSLog at the top of the function. I get a leak from the NSLog's ABRecordCopyValue every time through, but the code seems to run fine. Also, I can comment out the CFRelease(multi); at the end, which does exactly the same thing. Static analysis errors, but running code. So without a leak, this function crashes. To prevent a crash, I need to haemorrhage memory. Neither is a great solution. Can anyone point out what's going on?

    Solution: It turned out that I wasn't storing the ABRecordRef personDealingWith var correctly. I'm still not sure how to do that properly, but instead of having the functionality in another routine (performed later), I'm now doing the grunt-work in the delegate method, and using the derived results at my leisure. The new (working) routine:

      - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker shouldContinueAfterSelectingPerson:(ABRecordRef)person {
          // as soon as they select someone, return
          personDealingWithFullName = (NSString *)ABRecordCopyCompositeName(person);
          personDealingWithFirstName = (NSString *)ABRecordCopyValue(person, kABPersonFirstNameProperty);
          // construct array of emails
          [personDealingWithEmails removeAllObjects];
          ABMutableMultiValueRef multi = ABRecordCopyValue(person, kABPersonEmailProperty);
          if (ABMultiValueGetCount(multi) > 0) {
              // collect all emails in array
              for (CFIndex i = 0; i < ABMultiValueGetCount(multi); i++) {
                  CFStringRef emailRef = ABMultiValueCopyValueAtIndex(multi, i);
                  [personDealingWithEmails addObject:(NSString *)emailRef];
                  CFRelease(emailRef);
              }
          }
          CFRelease(multi);
          return NO;
      }

    Read the article

  • Why does BeginReceiveFrom never time out?

    - by James Hugard
    I am writing an asynchronous Ping using Raw Sockets in F#, to enable parallel requests using as few threads as possible ("System.Net.NetworkInformation.Ping" appears to use one thread per request, but have not tested this... also am interested in using F# async workflows). The synchronous version below correctly times out when the target host does not exist/respond, but the asynchronous version hangs. Both work when the host does respond... Any ideas? (note: the process must run as Admin for this code to work) This throws a timeout: let result = Ping.Ping ( IPAddress.Parse( "192.168.33.22" ), 1000 ) However, this hangs: let result = Ping.PingAsync ( IPAddress.Parse( "192.168.33.22" ), 1000 ) |> Async.RunSynchronously Here's the code... module Ping open System open System.Net open System.Net.Sockets open System.Threading //---- ICMP Packet Classes type IcmpMessage (t : byte) = let mutable m_type = t let mutable m_code = 0uy let mutable m_checksum = 0us member this.Type with get() = m_type member this.Code with get() = m_code member this.Checksum = m_checksum abstract Bytes : byte array default this.Bytes with get() = [| m_type m_code byte(m_checksum) byte(m_checksum >>> 8) |] member this.GetChecksum() = let mutable sum = 0ul let bytes = this.Bytes let mutable i = 0 // Sum up uint16s while i < bytes.Length - 1 do sum <- sum + uint32(BitConverter.ToUInt16( bytes, i )) i <- i + 2 // Add in last byte, if an odd size buffer if i <> bytes.Length then sum <- sum + uint32(bytes.[i]) // Shuffle the bits sum <- (sum >>> 16) + (sum &&& 0xFFFFul) sum <- sum + (sum >>> 16) sum <- ~~~sum uint16(sum) member this.UpdateChecksum() = m_checksum <- this.GetChecksum() type InformationMessage (t : byte) = inherit IcmpMessage(t) let mutable m_identifier = 0us let mutable m_sequenceNumber = 0us member this.Identifier = m_identifier member this.SequenceNumber = m_sequenceNumber override this.Bytes with get() = Array.append (base.Bytes) [| byte(m_identifier) byte(m_identifier >>> 8) byte(m_sequenceNumber) byte(m_sequenceNumber >>> 8) |] type EchoMessage() = inherit InformationMessage( 8uy ) let mutable m_data = Array.create 32 32uy do base.UpdateChecksum() member this.Data with get() = m_data and set(d) = m_data <- d this.UpdateChecksum() override this.Bytes with get() = Array.append (base.Bytes) (this.Data) //---- Synchronous Ping let Ping (host : IPAddress, timeout : int ) = let mutable ep = new IPEndPoint( host, 0 ) let socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let mutable buffer = packet.Bytes try if socket.SendTo( buffer, ep ) <= 0 then raise (SocketException()) buffer <- Array.create (buffer.Length + 20) 0uy let mutable epr = ep :> EndPoint if socket.ReceiveFrom( buffer, &epr ) <= 0 then raise (SocketException()) finally socket.Close() buffer //---- Entensions to the F# Async class to allow up to 5 paramters (not just 3) type Async with static member FromBeginEnd(arg1,arg2,arg3,arg4,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,iar,state)), endAction, ?cancelAction=cancelAction) static member FromBeginEnd(arg1,arg2,arg3,arg4,arg5,beginAction,endAction,?cancelAction): Async<'T> = Async.FromBeginEnd((fun (iar,state) -> beginAction(arg1,arg2,arg3,arg4,arg5,iar,state)), endAction, 
?cancelAction=cancelAction) //---- Extensions to the Socket class to provide async SendTo and ReceiveFrom type System.Net.Sockets.Socket with member this.AsyncSendTo( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginSendTo, this.EndSendTo ) member this.AsyncReceiveFrom( buffer, offset, size, socketFlags, remoteEP ) = Async.FromBeginEnd( buffer, offset, size, socketFlags, remoteEP, this.BeginReceiveFrom, (fun asyncResult -> this.EndReceiveFrom(asyncResult, remoteEP) ) ) //---- Asynchronous Ping let PingAsync (host : IPAddress, timeout : int ) = async { let ep = IPEndPoint( host, 0 ) use socket = new Socket( AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout ) socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, timeout ) let packet = EchoMessage() let outbuffer = packet.Bytes try let! result = socket.AsyncSendTo( outbuffer, 0, outbuffer.Length, SocketFlags.None, ep ) if result <= 0 then raise (SocketException()) let epr = ref (ep :> EndPoint) let inbuffer = Array.create (outbuffer.Length + 256) 0uy let! result = socket.AsyncReceiveFrom( inbuffer, 0, inbuffer.Length, SocketFlags.None, epr ) if result <= 0 then raise (SocketException()) return inbuffer finally socket.Close() }

    Read the article

  • Parallelism implies concurrency but not the other way round right?

    - by Cedric Martin
    I often read that parallelism and concurrency are different things. Very often the answerers/commenters go as far as writing that they're two entirely different things. Yet in my view they're related, but I'd like some clarification on that. For example, if I'm on a multi-core CPU and manage to divide the computation into x smaller computations (say using fork/join), each running in its own thread, I'll have a program that is both doing parallel computation (because supposedly at any point in time several threads are going to run on several cores) and being concurrent, right? While if I'm simply using, say, Java and dealing with UI events and repaints on the Event Dispatch Thread plus running the only thread I created myself, I'll have a program that is concurrent (EDT + GC thread + my main thread etc.) but not parallel. I'd like to know if I'm getting this right and if parallelism (on a "single but multi-core" system) always implies concurrency or not? Also, are multi-threaded programs running on multi-core CPUs, but where the different threads are doing totally different computations, considered to be using "parallelism"?
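    As a concrete sketch of the first scenario, here is a minimal, hypothetical fork/join example that divides one computation into smaller tasks so that several threads can run on several cores at the same time, i.e. it is both parallel and concurrent:

      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.RecursiveTask;

      // Divide a summation into smaller sub-tasks that the fork/join pool can run
      // on several worker threads (and, on a multi-core CPU, on several cores).
      class SumTask extends RecursiveTask<Long> {
          private final long[] data;
          private final int from, to;

          SumTask(long[] data, int from, int to) { this.data = data; this.from = from; this.to = to; }

          @Override
          protected Long compute() {
              if (to - from <= 10_000) {                  // small enough: compute directly
                  long sum = 0;
                  for (int i = from; i < to; i++) sum += data[i];
                  return sum;
              }
              int mid = (from + to) / 2;                  // otherwise split in two and fork
              SumTask left = new SumTask(data, from, mid);
              SumTask right = new SumTask(data, mid, to);
              left.fork();
              return right.compute() + left.join();
          }
      }

      // Usage (values is a hypothetical long[]):
      //   long total = new ForkJoinPool().invoke(new SumTask(values, 0, values.length));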

    Read the article

  • zabbix monitoring mysql database

    - by krisdigitx
    I have a server running multiple instances of mysql, which also has the zabbix-agent running. In zabbix_agentd.conf I have specified:

      UserParameter=multi.mysql[*],mysqladmin --socket=$1 -uzabbixagent extended-status 2>/dev/null | awk '/ $3 /{print $$4}'

    where $1 is the socket instance. From the zabbix server I can run the test successfully:

      zabbix_get -s ip_of_server -k multi.mysql[/var/lib/mysql/mysql2.sock]

    and it returns all the values. However, the zabbix item/trigger does not generate the graphs. I have created a MACRO for $1, which is the socket location, {$MYSQL_SOCKET1} = '/var/lib/mysql/mysql2.sock', and I use this key in items to poll the value: multi.mysql[{$MYSQL_SOCKET1},Bytes_sent]

    LOGS: this is what I get in the logs:

      3360:20120214:144716.278 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_received]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters
      3360:20120214:144716.372 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_sent]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters

    Any ideas where the problem could be? FIXED: I removed the single quotes from the macro line, so it is now {$MYSQL_SOCKET1} = /var/lib/mysql/mysql2.sock, and it worked...

    Read the article

  • Flex HttpService POST limited to 543 Bytes per Form field?

    - by motto
    Hi, I am getting a FaultEvent when trying to send form fields through HTTPService that contain more than 542 chars.

    Initializing the HttpService:

      httpServ = new HTTPService();
      httpServ.method = 'POST';
      httpServ.url = ENDPOINT_URL; //http://localhost:3001/ReportError.aspx
      httpServ.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
      httpServ.contentType = HTTPService.CONTENT_TYPE_FORM;
      httpServ.addEventListener(ResultEvent.RESULT, OnErrorSent);
      httpServ.addEventListener(FaultEvent.FAULT, OnFault);

    Sending the request:

      var params:Object = {};
      //params["stack"] = e.stackTrace.slice(0, 542); //length 542 = works
      //params["stack2"] = e.stackTrace.slice(1, 543); //length 542 = works (just to show that it's not about the content itself)
      params["stack3"] = e.stackTrace.slice(0, 543); //length 543 = fails

    I also seem to be able to create many form fields (each 542 chars long), so it's not a limit of the request itself but of the individual form field:

      var params:Object = {};
      params["stack"] = e.stackTrace.slice(0, 542); //length 542
      params["stack2"] = e.stackTrace.slice(1, 543); //length 542
      params["stack3"] = e.stackTrace.slice(2, 544); //length 542
      // Length > 1600 chars

    The receiving party is an ASP.NET 4 site on the same domain and port. I hope someone has already come across a similar restriction or has some general advice on how to trace this problem down further. Thanks in advance.

    Read the article

  • How are Java ByteBuffer's limit and position variables updated?

    - by Dummy Derp
    There are two scenarios: writing and reading.

    Writing: Whenever I write something to the ByteBuffer by calling its put(byte[]) method, the position variable is incremented as: current position + size of byte[], and limit stays at the max. If, however, I put the data in a view buffer, then I will have to manually calculate and update the position. Before I call the write(ByteBuffer) method of the channel to write something, I will have to flip() the ByteBuffer so that position points to zero and limit points to the last byte that was written to the ByteBuffer.

    Reading: Whenever I call the read(ByteBuffer) method of a channel to read something, the position variable stays at 0 and the limit variable of the ByteBuffer points to the last byte that was read. So, if the ByteBuffer is smaller than the file being read, the limit variable is pushed to max. This means that the ByteBuffer is already flipped and I can proceed to extracting the values from the ByteBuffer.

    Please, correct me where I am wrong :)
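    For the buffer itself (leaving channel reads aside), a small self-contained sketch of how position and limit actually move on put(), flip(), and get():

      import java.nio.ByteBuffer;

      public class BufferDemo {
          public static void main(String[] args) {
              ByteBuffer buf = ByteBuffer.allocate(16);
              buf.put(new byte[] { 1, 2, 3, 4 });          // position = 4, limit = capacity (16)
              System.out.println(buf.position() + " " + buf.limit());   // 4 16

              buf.flip();                                  // limit = old position (4), position = 0
              System.out.println(buf.position() + " " + buf.limit());   // 0 4

              byte b = buf.get();                          // reading advances position, limit unchanged
              System.out.println(buf.position() + " " + buf.limit());   // 1 4
          }
      }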

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections or primitive types. These are used for mapping a database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty, and, when the session is flushed, will trigger an UPDATE. So, in your custom user type, you must implement this carefully so that it is not mistakenly considered changed. For example, you can cache the original column value inside of it, and compare it with the one in the other instance. Let’s see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image: 1: [Serializable] 2: public sealed class ImageUserType : IUserType 3: { 4: private Byte[] data = null; 5: 6: public ImageUserType() 7: { 8: this.ImageFormat = ImageFormat.Png; 9: } 10: 11: public ImageFormat ImageFormat 12: { 13: get; 14: set; 15: } 16: 17: public Boolean IsMutable 18: { 19: get 20: { 21: return (true); 22: } 23: } 24: 25: public Object Assemble(Object cached, Object owner) 26: { 27: return (cached); 28: } 29: 30: public Object DeepCopy(Object value) 31: { 32: return (value); 33: } 34: 35: public Object Disassemble(Object value) 36: { 37: return (value); 38: } 39: 40: public new Boolean Equals(Object x, Object y) 41: { 42: return (Object.Equals(x, y)); 43: } 44: 45: public Int32 GetHashCode(Object x) 46: { 47: return ((x != null) ? x.GetHashCode() : 0); 48: } 49: 50: public override Int32 GetHashCode() 51: { 52: return ((this.data != null) ? this.data.GetHashCode() : 0); 53: } 54: 55: public override Boolean Equals(Object obj) 56: { 57: ImageUserType other = obj as ImageUserType; 58: 59: if (other == null) 60: { 61: return (false); 62: } 63: 64: if (Object.ReferenceEquals(this, other) == true) 65: { 66: return (true); 67: } 68: 69: return (this.data.SequenceEqual(other.data)); 70: } 71: 72: public Object NullSafeGet(IDataReader rs, String[] names, Object owner) 73: { 74: Int32 index = rs.GetOrdinal(names[0]); 75: Byte[] data = rs.GetValue(index) as Byte[]; 76: 77: this.data = data as Byte[]; 78: 79: if (data == null) 80: { 81: return (null); 82: } 83: 84: using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0])) 85: { 86: return (Image.FromStream(stream)); 87: } 88: } 89: 90: public void NullSafeSet(IDbCommand cmd, Object value, Int32 index) 91: { 92: if (value != null) 93: { 94: Image data = value as Image; 95: 96: using (MemoryStream stream = new MemoryStream()) 97: { 98: data.Save(stream, this.ImageFormat); 99: value = stream.ToArray(); 100: } 101: } 102: 103: (cmd.Parameters[index] as DbParameter).Value = value ?? 
DBNull.Value; 104: } 105: 106: public Object Replace(Object original, Object target, Object owner) 107: { 108: return (original); 109: } 110: 111: public Type ReturnedType 112: { 113: get 114: { 115: return (typeof(Image)); 116: } 117: } 118: 119: public SqlType[] SqlTypes 120: { 121: get 122: { 123: return (new SqlType[] { new SqlType(DbType.Binary) }); 124: } 125: } 126: } In this case, we need to cache the original Byte[] data because it’s not easy to compare two Image instances, unless, of course, they are the same.

    Read the article

  • Oh no! My padding's invalid!

    - by Simon Cooper
    Recently, I've been doing some work involving cryptography, and encountered the standard .NET CryptographicException: 'Padding is invalid and cannot be removed.' Searching on StackOverflow produces 57 questions concerning this exception; it's a very commonly encountered problem. So I decided to have a closer look. To test this, I created a simple project that encrypts and then decrypts a byte array:

      // create some random data
      byte[] data = new byte[100];
      new Random().NextBytes(data);

      // use the Rijndael symmetric algorithm
      RijndaelManaged rij = new RijndaelManaged();
      byte[] encrypted;

      // encrypt the data using a CryptoStream
      using (var encryptor = rij.CreateEncryptor())
      using (MemoryStream encryptedStream = new MemoryStream())
      using (CryptoStream crypto = new CryptoStream(
          encryptedStream, encryptor, CryptoStreamMode.Write))
      {
          crypto.Write(data, 0, data.Length);
          encrypted = encryptedStream.ToArray();
      }

      byte[] decrypted;

      // and decrypt it again
      using (var decryptor = rij.CreateDecryptor())
      using (CryptoStream crypto = new CryptoStream(
          new MemoryStream(encrypted), decryptor, CryptoStreamMode.Read))
      {
          decrypted = new byte[data.Length];
          crypto.Read(decrypted, 0, decrypted.Length);
      }

    Sure enough, I got exactly the same CryptographicException when trying to decrypt the data even in this simple example. Well, I'm obviously missing something, if I can't even get this single method right! What does the exception message actually mean? What am I missing? Well, after playing around a bit, I discovered the problem was fixed by changing the encryption step to this:

      // encrypt the data using a CryptoStream
      using (var encryptor = rij.CreateEncryptor())
      using (MemoryStream encryptedStream = new MemoryStream())
      {
          using (CryptoStream crypto = new CryptoStream(
              encryptedStream, encryptor, CryptoStreamMode.Write))
          {
              crypto.Write(data, 0, data.Length);
          }
          encrypted = encryptedStream.ToArray();
      }

    Aaaah, so that's what the problem was. The CryptoStream wasn't flushing all its data to the MemoryStream before it was being read, and closing the stream causes it to flush everything to the backing stream. But why does this cause an error in padding?

    Cryptographic padding

    All symmetric encryption algorithms (of which Rijndael is one) operate on fixed block sizes. For Rijndael, the default block size is 16 bytes. This means the input needs to be a multiple of 16 bytes long. If it isn't, then the input is padded to 16 bytes using one of the padding modes. This is only done to the final block of data to be encrypted. CryptoStream has a special method to flush this final block of data - FlushFinalBlock. Calling Stream.Flush() does not flush the final block, as you might expect. Only by closing the stream or explicitly calling FlushFinalBlock is the final block, with any padding, encrypted and written to the backing stream. Without this call, the encrypted data is 16 bytes shorter than it should be. If this final block wasn't written, then the decryption gets to the final 16 bytes of the encrypted data and tries to decrypt it as the final block with padding. The end bytes don't match the padding scheme it's been told to use, therefore it throws an exception stating what is wrong - what the decryptor expects to be padding actually isn't, and so can't be removed from the stream.
    So, as well as closing the stream before reading the result, an alternative fix to my encryption code is the following:

      // encrypt the data using a CryptoStream
      using (var encryptor = rij.CreateEncryptor())
      using (MemoryStream encryptedStream = new MemoryStream())
      using (CryptoStream crypto = new CryptoStream(
          encryptedStream, encryptor, CryptoStreamMode.Write))
      {
          crypto.Write(data, 0, data.Length);
          // explicitly flush the final block of data
          crypto.FlushFinalBlock();
          encrypted = encryptedStream.ToArray();
      }

    Conclusion

    So, if your padding is invalid, make sure that you close or call FlushFinalBlock on any CryptoStream performing encryption before you access the encrypted data. Flush isn't enough. Only then will the final block be present in the encrypted data, allowing it to be decrypted successfully.

    Read the article

  • How can I decode UTF-16 data in Perl when I don't know the byte order?

    - by Geo
    If I open a file (and specify an encoding directly):

      open(my $file,"<:encoding(UTF-16)","some.file") || die "error $!\n";
      while(<$file>) {
          print "$_\n";
      }
      close($file);

    I can read the file contents nicely. However, if I do:

      use Encode;
      open(my $file,"some.file") || die "error $!\n";
      while(<$file>) {
          print decode("UTF-16",$_);
      }
      close($file);

    I get the following error:

      UTF-16:Unrecognised BOM d at F:/Perl/lib/Encode.pm line 174

    How can I make it work with decode?

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes

    The configuration of data node threads can be managed in two ways via the config.ini file:
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets of dense multi-core processors.

    24 Threads and Beyond!

    An example of how to make best use of a 24 CPU/thread server box is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication.

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, and so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide an option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task. Only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32 CPU/Thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts.
    - The number of send and receive threads should, in most cases, still be sufficient.

    On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.
    On a 48 CPU/Thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoid using IO threads and main thread on the same CPU
    - Add 1 more receive thread.

    Summary

    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
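    As a rough illustration of the two approaches, the 24 CPU/thread layout described above might be expressed in config.ini along these lines; the thread counts are the ones from the post, but the CPU numbers are hypothetical and the exact ThreadConfig syntax should be checked against the MySQL Cluster documentation for your version:

      [ndbd default]
      # Simple option: just set the total number of execution threads.
      # MaxNoOfExecutionThreads=24

      # Finer-grained option (illustrative CPU numbers): one entry per thread type, bound to
      # specific CPUs, mirroring the 8 ldm / 4 tc / 3 recv / 3 send / 1 rep layout above,
      # with main and io sharing a CPU (20 CPUs bound in total).
      ThreadConfig="ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={cpubind=19},main={cpubind=0},io={cpubind=0}"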

    Read the article

  • Are UTF16 (as used by, for example, wide WinAPI functions) characters always 2 bytes long?

    - by Cray
    Please clarify for me, how does UTF16 work? I am a little confused, considering these points:
    - There is a static type in C++, WCHAR, which is 2 bytes long (always 2 bytes long, obviously).
    - Most of msdn and some other documentation seem to have the assumption that the characters are always 2 bytes long. This can just be my imagination, I can't come up with any particular examples, but it just seems that way.
    - There are no "extra wide" functions or character types widely used in C++ or windows, so I would assume that UTF16 is all that is ever needed.
    - To my uncertain knowledge, unicode has a lot more characters than 65535, so they obviously don't have enough space in 2 bytes.

    UTF16 seems to be a bigger version of UTF8, and UTF8 characters can be of different lengths. So if a UTF16 character is not always 2 bytes long, how long else could it be? 3 bytes? Or only multiples of 2? And then, for example, if there is a winapi function that wants to know the size of a wide string in characters, and the string contains 2 characters which are each 4 bytes long, how is the size of that string in characters calculated? Is it 2 chars long or 4 chars long? (since it is 8 bytes long, and each WCHAR is 2 bytes)
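    For what it's worth, a UTF-16 code unit is always 2 bytes, but a single character (code point) outside the Basic Multilingual Plane is encoded as a surrogate pair of two code units, i.e. 4 bytes, never 3; APIs that count WCHARs report lengths in code units, not characters. A small sketch of that distinction, using Java only because its char is likewise a 16-bit UTF-16 code unit:

      public class Utf16Demo {
          public static void main(String[] args) {
              // U+1D11E (musical G clef) lies outside the BMP, so UTF-16 needs a surrogate pair.
              String s = new String(Character.toChars(0x1D11E));
              System.out.println(s.length());                        // 2 (UTF-16 code units, like WCHARs)
              System.out.println(s.codePointCount(0, s.length()));   // 1 (actual character)
              System.out.println(Integer.toHexString(s.charAt(0)));  // d834 (high surrogate)
              System.out.println(Integer.toHexString(s.charAt(1)));  // dd1e (low surrogate)
          }
      }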

    Read the article

  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi, Can I rely on the first few bytes of data compressed using the System.IO.Compression.DeflateStream in .NET always being the same? These bytes seem to always be the 1st bytes: 237, 189, 7, 96, 28, 73, 150, 37, 38, 47 , ... I'm assuming this is some kind of header, I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this? Background info (The reason I want to know this info is...) I have a load of data in a database table that could do with being made smaller. I've decided I'm going to start compressing the data and not going to bother compressing the existing data. When the data gets into my .NET code the data is a String. I'd like to be able to look at the 1st few bytes of the string and see if it has been compressed, if it has then I need to de-compress it. I was originally thinking I could convert the string to bytes and just try de-compressing the data. Then if an exception happens, I could just assume it wasn't compressed. But I think checking the header bytes would give me much better performance. Many thanks, Mike G
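    One detail that may matter when choosing the check: the raw deflate format that DeflateStream writes (RFC 1951) does not define a fixed magic number, so a particular run of leading bytes is an observation about this data rather than a guarantee of the format; the gzip and zlib container formats, by contrast, do start with fixed signatures. Purely as an illustration of that kind of header sniffing (a sketch in Java, not the .NET API), one might check:

      public class CompressionSniff {
          // gzip (RFC 1952) members always start with the two ID bytes 0x1F 0x8B.
          static boolean looksGzip(byte[] d) {
              return d.length >= 2 && (d[0] & 0xFF) == 0x1F && (d[1] & 0xFF) == 0x8B;
          }

          // zlib (RFC 1950) streams start with a CMF/FLG pair: compression method 8 (deflate)
          // and the 16-bit header value divisible by 31.
          static boolean looksZlib(byte[] d) {
              if (d.length < 2) return false;
              int cmf = d[0] & 0xFF, flg = d[1] & 0xFF;
              return (cmf & 0x0F) == 8 && ((cmf << 8) + flg) % 31 == 0;
          }

          public static void main(String[] args) {
              System.out.println(looksGzip(new byte[] { 0x1F, (byte) 0x8B, 8 }));  // true
              System.out.println(looksZlib(new byte[] { 0x78, (byte) 0x9C }));     // true
          }
      }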

    Read the article

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In the example tree image on that Wikipedia page, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you will basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used that tree). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that.

    But what if the following happens: in the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 of them. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings up this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (using Huffman coding, due to the school assignment's rules)

    This tutorial seems to explain prefix codes a bit better: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
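    One common way around the degenerate case (not something Huffman coding mandates, just a convention): if the tree consists of a single leaf, give that lone symbol an arbitrary 1-bit code such as "0", and record the symbol count in the header so the decoder knows how many bytes to emit. A minimal, hypothetical Java sketch of the code-assignment step with that special case handled:

      import java.util.HashMap;
      import java.util.Map;

      class Node {
          final Byte symbol;          // null for internal nodes
          final Node left, right;
          Node(Byte symbol) { this.symbol = symbol; this.left = null; this.right = null; }
          Node(Node left, Node right) { this.symbol = null; this.left = left; this.right = right; }
          boolean isLeaf() { return symbol != null; }
      }

      public class PrefixCodes {
          static void assignCodes(Node n, String prefix, Map<Byte, String> out) {
              if (n.isLeaf()) {
                  // Degenerate tree (only one distinct byte in the file): the root is itself a leaf,
                  // so the normal walk would produce an empty code; emit a 1-bit code instead.
                  out.put(n.symbol, prefix.isEmpty() ? "0" : prefix);
                  return;
              }
              assignCodes(n.left, prefix + "0", out);
              assignCodes(n.right, prefix + "1", out);
          }

          public static void main(String[] args) {
              Map<Byte, String> codes = new HashMap<>();
              assignCodes(new Node((byte) 0x6D), "", codes);   // a file containing only 'm'
              System.out.println(codes);                        // {109=0}
          }
      }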

    Read the article

  • An easy way to replace fread()'s with reading from a byte array?

    - by Sam Washburn
    I have a piece of code that needs to be run from a restricted environment that doesn't allow stdio (Flash's Alchemy compiler). The code uses standard fopen/fread functions and I need to convert it to read from a char* array. Any ideas on how to best approach this? Does a wrapper exist or some library that would help? Thanks!

    EDIT: I should also mention that it's reading in structs. Like this:

      fread(&myStruct, 1, sizeof(myStruct), f);

    Read the article

  • Mallocing an unsigned char array to store ints

    - by Max Desmond
    I keep getting a segmentation fault when I test the following code. I am currently unable to find an answer after having searched the web.

      a = (byte *)malloc(sizeof(byte) * x ) ;
      for( i = 0 ; i < x-1 ; i++ )
      {
          scanf("%d", &y ) ;
          a[i] = y ;
      }

    Both y and x are initialized; x is the size of the array, determined by the user. The segmentation fault is on the second to last integer to be added; I found this by adding printf("roar") ; before setting a[i] to y and entering one number at a time. byte is a typedef of an unsigned char. Note: I've also tried using a[i] = (byte)y ; The variable a is declared as follows: byte *a ;

    If you need to view the entire code it is this:

      #include <stdio.h>
      #include <stdlib.h>
      #include "sort.h"

      int p_cmp_f () ;

      int main( int argc, char *argv[] )
      {
          int x, y, i, choice ;
          byte *a ;

          while( choice !=2 )
          {
              printf( "Would you like to sort integers?\n1. Yes\n2. No\n" ) ;
              scanf("%d", &choice ) ;
              switch(choice)
              {
                  case 1:
                      printf( "Enter the length of the array: " ) ;
                      scanf( "%d", &x ) ;
                      a = (byte *)malloc(sizeof( byte ) * x ) ;
                      printf( "Enter %d integers to add to the array: ", x ) ;
                      for( i = 0 ; i < x -1 ; i++ )
                      {
                          scanf( "%d", &y ) ;
                          a[i] = y ;
                      }
                      switch( choice )
                      {
                          case 1:
                              bubble_sort( a, x, sizeof(int), p_cmp_f ) ;
                              for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ) ;
                              break ;
                          case 2:
                              selection_sort( a, x, sizeof(int), p_cmp_f ) ;
                              for( i = 0 ; i < x; i++ ) printf( "%d", a[i] ) ;
                              break ;
                          case 3:
                              insertion_sort( a, x, sizeof(int), p_cmp_f ) ;
                              for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ) ;
                              break ;
                          case 4:
                              merge_sort( a, x, sizeof(int), p_cmp_f ) ;
                              for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ) ;
                              break ;
                          case 5:
                              quick_sort( a, x, sizeof(int), p_cmp_f ) ;
                              for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ) ;
                              break ;
                          default:
                              printf("Enter either 1,2,3,4, or 5" ) ;
                              break ;
                      }
                  case 2:
                      printf( "Thank you for using this program\n" ) ;
                      return 0 ;
                      break ;
                  default:
                      printf( "Enter either 1 or 2: " ) ;
                      break ;
              }
          }
          free(a) ;
          return 0 ;
      }

      int p_cmp_f( byte *element1, byte *element2 )
      {
          return *((int *)element1) - *((int *)element2) ;
      }

    Read the article
