Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Protocol/Packet Design Questions

    - by cam
    I'm looking to design a protocol for a client-server application and need some links to resources that may help me. The big part is that I'm trying to create my own "packet" format so I can minimize the amount of information being sent. I've been looking at existing protocols to dissect their design, but some seem to lack packet design entirely, such as SMTP (which just sends strings terminated by CRLF). What are the advantages/disadvantages of using a system like SMTP over one that uses a custom-made packet? Couldn't SMTP use only a couple of bytes to cover all commands through bit flags and save bandwidth/space? Just trying to get my head around all this.
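
    A minimal sketch of the kind of compact binary header the question is reaching for; every field name and flag value here is hypothetical, purely for illustration:

        #include <stdint.h>

        /* Hypothetical one-byte command flags standing in for SMTP's
           multi-character text verbs. */
        enum {
            CMD_HELO = 1 << 0,
            CMD_MAIL = 1 << 1,
            CMD_DATA = 1 << 2,
            CMD_QUIT = 1 << 3
        };

        /* A fixed 4-byte header instead of a CRLF-terminated line. */
        #pragma pack(push, 1)
        typedef struct {
            uint8_t  command;  /* one of the CMD_* flags above */
            uint8_t  version;  /* protocol version */
            uint16_t length;   /* payload length, network byte order */
        } PacketHeader;
        #pragma pack(pop)

    The usual trade-off: a text protocol like SMTP spends more bytes per command but can be driven with telnet and read by eye, while a packed binary header saves bandwidth at the cost of that transparency.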

    Read the article

  • Splitting string into array upon token

    - by Gnutt
    I'm writing a script to perform an offsite rsync backup, and whenever the rsync line receives some output it goes into a single variable. I then want to split that variable into an array on the ^M token, so that I can send the pieces to two different logger sessions (so I get them on separate lines in the log). My current line to perform the rsync:

        result=`rsync --del -az -e "ssh -i $cert" $source $destination 2>&1`

    Result in the log when the server is unavailable:

        ssh: connect to host offsite port 22: Connection timed out^M
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]

    Read the article

  • In Perl, can I limit the length of a line as I read it in from a file (like fgets)

    - by SB
    I'm trying to write a piece of code that reads a file line by line and stores each line, up to a certain amount of input data. I want to guard against the end-user being evil and putting something like a gig of data on one line, in addition to guarding against sucking in an abnormally large file. Doing $str = <FILE> will still read in a whole line, and that could be very long and blow up my memory. fgets lets me avoid this by letting me specify a number of bytes to read during each call, essentially letting me split one long line into pieces of my maximum length. Is there a similar way to do this in Perl? I saw something about sv_gets but am not sure how to use it (though I only did a cursory Google search). Thanks.
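
    For reference, this is the bounded-read behaviour being described: a minimal C sketch of fgets (the filename is hypothetical), which never stores more than the buffer size it is handed, so a pathologically long line simply arrives in pieces over successive calls:

        #include <stdio.h>

        int main(void)
        {
            char buf[4096];
            FILE *fp = fopen("input.txt", "r");  /* hypothetical input file */
            if (!fp)
                return 1;
            /* fgets stores at most sizeof(buf)-1 bytes per call, so a
               gigabyte-long line cannot blow up memory. */
            while (fgets(buf, sizeof buf, fp) != NULL) {
                /* process up to 4095 bytes of the line here */
            }
            fclose(fp);
            return 0;
        }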

    Read the article

  • What .NET UnmanagedType is Unicode (UTF-16)?

    - by Pat
    I am packing bytes into a struct, and some of them correspond to a Unicode string. The following works fine for an ASCII string:

        [StructLayout(LayoutKind.Sequential)]
        private struct PacketBytes
        {
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
            public string MyString;
        }

    I assumed that I could do

        [StructLayout(LayoutKind.Sequential)]
        private struct PacketBytes
        {
            [MarshalAs(UnmanagedType.LPWStr, SizeConst = 32)]
            public string MyString;
        }

    to make it Unicode, but that didn't work. (Since this field is part of a struct with other fields, which I've omitted for clarity, I can't simply change the CharSet of the containing struct.) Any idea what I'm doing wrong?

    Read the article

  • (iphone) maintaining CGContextRef or CGLayerRef is a bad idea?

    - by Eugene
    Hi, I need to work with many images, and I can't hold them all as UIImages in memory because they are too big. I also need to change the colors of the images and merge them on the fly. Creating a UIImage from the underlying NSData, changing its color, and combining images is fairly slow when you can't keep many images in memory (as far as I can tell). I thought maybe I could store the underlying CGLayerRef (for the images that will be combined) and a CGContextRef (for the resulting combined image). I am new to the drawing world, and not sure whether a CGLayerRef or CGContextRef is smaller in memory than a UIImage. I recently heard that a w*h image takes up w*h*4 bytes in memory. Does a CGLayerRef or CGContextRef also take up that much memory? Thank you
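
    For the arithmetic in question: a bitmap context's backing store is bytesPerRow * height, so for 8-bit RGBA it is the same w*h*4 figure quoted for a decoded image (a 1024x1024 canvas costs 4 MB however it is wrapped). A sketch at the C CoreGraphics level, with the sizes assumed:

        #include <CoreGraphics/CoreGraphics.h>

        /* The context allocates w * 4 bytes per row internally, i.e.
           w*h*4 in total for an 8-bit-per-channel RGBA bitmap. */
        CGContextRef make_canvas(size_t w, size_t h)
        {
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(
                NULL, w, h, 8, w * 4, space,
                kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);
            return ctx;
        }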

    Read the article

  • Extract data from uint8 to double

    - by HADJ AMOR HASSEN
    I have a C function receiving a uint8 pointer along with another parameter giving its size (number of bytes). I want to extract double data from this buffer. Here is my code:

        Write(uint8* data, uint8 size)  /* data and size are given by a callback to my function */
        {
            double d;
            for (i = 0; i < size; i++) {
                d = ((double*)&data)[i];
                printf(" d = %d\n");
            }
        }

    The problem is that I am not receiving what I am sending from the external hardware. I guess that my cast is wrong. I tried other methods but without any good result. I am still not able to get what I send.
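
    A sketch of how this extraction is usually written, assuming the sender and receiver agree on endianness and on 8-byte IEEE-754 doubles. Two things in the original are worth naming: (double*)&data reinterprets the address of the pointer variable itself rather than the buffer, and %d is the wrong printf conversion for a double (whose argument is also missing):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        void Write(const uint8_t *data, uint8_t size)
        {
            size_t i;
            for (i = 0; i + sizeof(double) <= size; i += sizeof(double)) {
                double d;
                memcpy(&d, data + i, sizeof d);  /* alignment-safe copy */
                printf("d = %f\n", d);           /* %f, not %d */
            }
        }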

    Read the article

  • Operations on 64bit words in 32bit system

    - by Vilo
    I'm new here, same as I'm new to assembly. I hope that you can help me to start. I'm using 32-bit (i686) Ubuntu to write programs in assembly, using gcc as the compiler. I know that the general-purpose registers are 32 bits (4 bytes) at most, but what about when I have to operate on 64-bit numbers? Intel's documentation says that the higher bits are stored in %edx and the lower in %eax. Great... so how can I do something with this two-register number? I have to convert a 64-bit decimal to binary, then save it to memory and show it on the screen. How do I create the 64-bit quadword at the start of the program in the .data section?
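
    In GAS, a 64-bit initialised value in the .data section is declared with the .quad directive. One way to watch the %edx:%eax pairing at work is to let gcc show it: compiling the C sketch below with gcc -S -m32 lowers the 64-bit addition to an add/adc pair operating on the two halves.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* Held as a two-register pair (high half / low half) on i686. */
            uint64_t big = 0x123456789ABCDEF0ULL;
            big += 1;                 /* becomes add + adc in 32-bit code */
            printf("%llu\n", big);
            return 0;
        }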

    Read the article

  • What is/are the Scala way(s) to implement this Java "byte[] to Hex" class

    - by nicerobot
    I'm specifically interested in Scala (2.8) techniques for building strings with formats, as well as interesting ways to make such a capability easily accessible where it's useful (lists of bytes, String, ...?).

        public class Hex {
            public static String valueOf(final byte buf[]) {
                if (null == buf) {
                    return null;
                }
                final StringBuilder sb = new StringBuilder(buf.length * 2);
                for (final byte b : buf) {
                    sb.append(String.format("%02X", b & 0xff));
                }
                return sb.toString();
            }

            public static String valueOf(final Byteable o) {
                return valueOf(o.toByteArray());
            }
        }

    This is only a learning exercise (so the utility and implementation of the Java isn't a concern). Thanks

    Read the article

  • WAVEFORMATEX - how to read codec data at the end?

    - by Roey
    Hi All. I have a WAVEFORMATEX struct with some codec data at the end of it (10 bytes). I'm using C++. How do I access the data at the end? (This is a purely technical question.) I tried:

        WAVEFORMATEX* wav = (WAVEFORMATEX*)pmt->pbFormat;
        WORD me = wav->cbSize;
        wav = wav + sizeof(WAVEFORMATEX);
        BYTE* arr = new BYTE[me];
        memcpy(arr, (BYTE*)wav, me);

    Didn't work. Thanks, Roey
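
    For what it's worth, pointer arithmetic on a WAVEFORMATEX* moves in whole structs, so wav + sizeof(WAVEFORMATEX) jumps sizeof(WAVEFORMATEX) structs forward rather than that many bytes. A sketch of the byte-wise version, with pbFormat standing in for the question's pmt->pbFormat:

        #include <windows.h>
        #include <mmreg.h>
        #include <string.h>

        /* The cbSize bytes of codec data start immediately after the
           fixed-size struct, so the offset must be taken in bytes. */
        void copy_codec_data(BYTE *pbFormat, BYTE *out)
        {
            WAVEFORMATEX *wav = (WAVEFORMATEX *)pbFormat;
            BYTE *extra = (BYTE *)wav + sizeof(WAVEFORMATEX);  /* byte-wise */
            memcpy(out, extra, wav->cbSize);                   /* 10 bytes here */
        }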

    Read the article

  • C# Struct No Parameterless Constructor? See what I need to accomplish

    - by Changeling
    I am using a struct to pass to an unmanaged DLL, like so:

        [StructLayout(LayoutKind.Sequential)]
        public struct valTable
        {
            public byte type;
            public byte map;
            public byte spare1;
            public byte spare2;
            public int par;
            public int min;
            public byte[] name;

            public valTable()
            {
                name = new byte[24];
            }
        }

    The code above will not compile, because VS 2005 complains that "Structs cannot contain explicit parameterless constructors". In order to pass this struct to my DLL, I have to pass an array of structs, like so:

        valTable[] val = new valTable[281];

    What I would like to do is have the constructor called when I say new, creating the array of bytes as I am trying to demonstrate, because the DLL is looking for that byte array of size 24 in each element. How can I accomplish this?

    Read the article

  • Using sizeof with a dynamically allocated array

    - by robUK
    Hello, gcc 4.4.1, C89. I have the following code snippet:

        #include <stdlib.h>
        #include <stdio.h>

        char *buffer = malloc(10240);

        /* Check for memory error */
        if (!buffer) {
            fprintf(stderr, "Memory error\n");
            return 1;
        }

        printf("sizeof(buffer) [ %d ]\n", sizeof(buffer));

    However, sizeof(buffer) always prints 4. I know that a char* is only 4 bytes, but I have allocated 10kb of memory, so shouldn't the size be 10240? Am I thinking about this right? Many thanks for any suggestions,
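
    A sketch of the usual pattern: sizeof measures the pointer object itself (4 bytes on a 32-bit build), never the allocation behind it, so the allocated length has to be carried in its own variable:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            size_t len = 10240;          /* the size is tracked here */
            char *buffer = malloc(len);
            if (!buffer) {
                fprintf(stderr, "Memory error\n");
                return 1;
            }
            printf("allocated [ %lu ] bytes, sizeof(buffer) [ %lu ]\n",
                   (unsigned long)len, (unsigned long)sizeof(buffer));
            free(buffer);
            return 0;
        }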

    Read the article

  • C (or C++?) Syntax: STRUCTTYPE varname = {0};

    - by Jared Updike
    Normally one would declare/allocate a struct on the stack with:

        STRUCTTYPE varname;

    What does this syntax mean in C (or is it C++ only, or perhaps specific to VC++)?

        STRUCTTYPE varname = {0};

    where STRUCTTYPE is the name of a struct type, like RECT or something. This code compiles, and it seems to just zero out all the bytes of the struct, but I'd like to know for sure if anyone has a reference. Also, is there a name for this construct?
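
    A small sketch of the behaviour being described. This is standard C aggregate initialization (C++ has the same form), in which members without an explicit initializer are zero-initialized, so = {0} zeroes the entire struct:

        #include <stdio.h>

        struct Rect { int left, top, right, bottom; };

        int main(void)
        {
            struct Rect r = {0};   /* left = 0; the rest default to 0 too */
            printf("%d %d %d %d\n", r.left, r.top, r.right, r.bottom);
            return 0;
        }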

    Read the article

  • Database table schema design - varchar(n). Suitable choice of N

    - by morpheous
    Coming from a C background, I may be getting too anal about this and worrying unnecessarily about bits and bytes here. Still, I can't help thinking about how the data is actually stored, and that if I choose an N which is easily factorizable into a power of 2, the database will be more efficient in how it packs data. Using this "logic", I have a string field in a table which is variable length up to 21 chars. I am tempted to use 32 instead of 21 for the reason given above; however, now I am thinking that I am wasting disk space, because there will be space allocated for 11 extra chars that are guaranteed never to be used. Since I envisage storing several tens of thousands of rows a day, it all adds up. Question: mindful of all of the above, should I declare varchar(21) or varchar(32), and why?

    Read the article

  • Read a file into multiple byte[] arrays

    - by hankol
    I have an encryption algorithm (AES) that accepts a file converted to a byte array and encrypts it. Since I am going to process very big files, the JVM may go out of memory. I am planning to read the files into multiple byte arrays, each containing some part of the file, then iteratively feed the algorithm, and finally merge the results to produce the encrypted file. So my question is: is there any way to read a file part by part into multiple byte arrays? I thought I could use IOUtils.toByteArray(InputStream input) to read the file into a byte array, and then split the array into multiple parts with Arrays.copyOfRange(), but I am afraid that the first step, reading the whole file into one byte array, will make the JVM go out of memory. Any suggestions please? Thanks
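
    The chunked-read pattern being asked about, sketched in C for illustration (in Java the same shape is an InputStream.read loop into a fixed byte[]): read a fixed-size block, feed it to the cipher, repeat, so the whole file is never resident at once. Note that with a block cipher every chunk except the last must be a multiple of the block size, or a streaming mode must be used.

        #include <stdio.h>

        enum { CHUNK = 64 * 1024 };  /* 64 KiB, a multiple of the AES block size */

        int encrypt_in_chunks(FILE *in)
        {
            unsigned char buf[CHUNK];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                /* feed buf[0..n) to the cipher here */
            }
            return ferror(in) ? -1 : 0;
        }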

    Read the article

  • Continuously reading from a stream in C#?

    - by Damien Wildfire
    I have a Stream object that occasionally gets some data on it, but at unpredictable intervals. Messages that appear on the Stream are well-defined and declare the size of their payload in advance (the size is a 16-bit integer contained in the first two bytes of each message). I'd like to have a StreamWatcher class which detects when the Stream has some data on it. Once it does, I'd like an event to be raised so that a subscribed StreamProcessor instance can process the new message. Can this be done with C# events without using Threads directly? It seems like it should be straightforward, but I can't quite get my head around the right way to design this.

    Read the article

  • What weird encoding is movistar Venezuela sms on blackberry using?

    - by cjp
    Send an sms using the normal GSM TextMessage API on the BlackBerry, get back garbage. It's not unicode, phone is set to 7-bit send. Byte size is only off by one. Is there some default crypto thing, or some weird encoding they use? This code works most everywhere else in the world; this definitely seems like a movistar problem. The string that comes back is random 7-bit ascii except for a few high order bytes. Needless to say the source input text is totally 7 bit chars, which should work in sms, ISO-8859 and look the same in UTF charsets. Anybody seen this or got sms working in code on movistar VZ blackberries?

    Read the article

  • gcc compiled binaries w/different sizes?

    - by BillTorpey
    If the same code is built at different times w/gcc, the resulting binary will have different contents. OK, I'm not wild about that, but that's what it is. However, I've recently run into a situation where the same code, built with the same version of gcc, is generating a binary with a different size than a prior build (by about 1900 bytes). Does anyone have any idea what may be causing either of these situations? Is this some kind of ELF issue? Are there any tools out there (other than ldd) that can be used to dump contents of binaries to see what exactly is different? Thanks in advance.

    Read the article

  • How much RAM used by Python dict or list?

    - by Who8MyLunch
    My problem: I am writing a simple Python tool to help me visualize my data as a function of many parameters. Each change in parameters involves a non-trivial amount of time, so I would like to cache each step's resulting imagery and supporting data in a dictionary. But then I worry that this dictionary could grow too large over time. Most of my data is in the form of Numpy arrays. My question: how would one go about computing the total number of bytes used by a Python dictionary? The dictionary itself may contain lists and other dictionaries, each of which contain data stored in Numpy arrays. Ideas?

    Read the article

  • Optimizing Oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date,
               participant_code, ACTIVE
        FROM   CMP_USER_ROLE E
        WHERE  ACTIVE = 0
        AND    (SYSDATE BETWEEN effective_from_date AND effective_to_date
                OR TO_CHAR(effective_to_date,'YYYY-Q') = '2010-2')
        AND    participant_code = 'NY005'
        AND    NOT EXISTS (
                   SELECT 1
                   FROM   CMP_USER_ROLE r
                   WHERE  r.USER_ID = E.USER_ID
                   AND    r.role_id = E.role_id
                   AND    r.ACTIVE = 4
                   AND    E.effective_to_date <= (
                              SELECT MAX(last_update_date)
                              FROM   CMP_USER_ROLE S
                              WHERE  S.role_id = r.role_id
                              AND    S.role_id = r.role_id
                              AND    S.ACTIVE = 4))

    Explain plan:

        ------------------------------------------------------------------------------------------------------
        | Id  | Operation                         | Name             | Rows | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT                  |                  |    1 |    37 |   154   (2)| 00:00:02 |
        |*  1 |  FILTER                           |                  |      |       |            |          |
        |*  2 |   TABLE ACCESS BY INDEX ROWID     | USER_ROLE        |    1 |    37 |    30   (0)| 00:00:01 |
        |*  3 |    INDEX RANGE SCAN               | N_USER_ROLE_IDX6 |   27 |       |     3   (0)| 00:00:01 |
        |*  4 |   FILTER                          |                  |      |       |            |          |
        |   5 |    HASH GROUP BY                  |                  |    1 |    47 |   124   (2)| 00:00:02 |
        |*  6 |     TABLE ACCESS BY INDEX ROWID   | USER_ROLE        |  159 |  3339 |   119   (1)| 00:00:02 |
        |   7 |      NESTED LOOPS                 |                  |   11 |   517 |   123   (1)| 00:00:02 |
        |*  8 |       TABLE ACCESS BY INDEX ROWID | USER_ROLE        |    1 |    26 |     4   (0)| 00:00:01 |
        |*  9 |        INDEX RANGE SCAN           | N_USER_ROLE_IDX5 |    1 |       |     3   (0)| 00:00:01 |
        |* 10 |       INDEX RANGE SCAN            | N_USER_ROLE_IDX2 |  957 |       |    74   (2)| 00:00:01 |
        ------------------------------------------------------------------------------------------------------

    Read the article

  • What's the fastest way to get directory and subdirs size on unix using Perl?

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize {
            my $dirSize = 0;
            my @dirContent = <*>;
            my $sizeOfFilesInDir = 0;
            foreach my $dirContent (@dirContent) {
                if (-f $dirContent) {
                    my $size = (stat($dirContent))[7];
                    $dirSize += $size;
                }
                elsif (-d $dirContent) {
                    $dirSize += getDirSize($dirContent);
                }
            }
            return $dirSize;
        }

    The script has been executing for more than one hour and I want to make it faster. I tried the shell du command, but its output (converted to bytes) is not accurate, and it is also quite time-consuming. I am working on HP-UNIX 11i v1.

    Read the article

  • Increase a receive buffer in UDP socket

    - by unresolved_external
    I'm writing an app which transmits video and obviously uses the UDP protocol for this purpose. I am wondering how I can increase the size of the send/receive buffer, because currently the maximum size of data I can send is 65000 bytes. I already tried to do it in the following way:

        int option = 262144;
        if (setsockopt(m_SocketHandle, SOL_SOCKET, SO_RCVBUF,
                       (char*)&option, sizeof(option)) < 0)
        {
            printf("setsockopt failed\n");
        }

    But it did not work. So how can I do it?
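
    Two separate things are in play here, so a sketch of both. SO_RCVBUF only grows the receive queue, so the send side needs SO_SNDBUF as well; but the roughly 65,507-byte ceiling on a single send comes from UDP's 16-bit length field (65,535 minus the 8-byte UDP and 20-byte IP headers), and no socket-buffer option lifts it, so video frames larger than that have to be split across datagrams.

        #include <sys/socket.h>

        /* Enlarge both directions on a UDP socket; returns 0 on success. */
        int set_buffer_sizes(int fd, int bytes)
        {
            if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                           (char*)&bytes, sizeof(bytes)) < 0)
                return -1;
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                           (char*)&bytes, sizeof(bytes)) < 0)
                return -1;
            return 0;
        }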

    Read the article

  • Rewrite Registry File in Windows

    - by Vulcan Eager
    I have been trying to find a way to "defragment" the registry on my Windows machine. Firstly, does this make sense? Are there any benefits to doing this? (Not much love on superuser.com.) Secondly, I am looking for a way to rewrite the registry using C/C++ with the Windows API. Is there a way to read the registry and write it to a new file, getting rid of unused bytes along the way? (I might have to write the new file and then boot into another OS/disk before I can overwrite the original... but I am willing to take that risk.)
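
    One approach sometimes suggested for the second part, offered only as a hedged sketch: RegSaveKey() serializes a key and its subtree into a fresh hive file, which compacts away slack space, and RegReplaceKey() arranges for that file to replace the backing hive at the next boot. Both calls need backup/restore privileges, the file paths below are hypothetical, and whether the rewrite is worthwhile is exactly the first question:

        #include <windows.h>

        /* Sketch: dump a subtree to a compacted hive file, then queue it
           to replace the original hive at the next reboot. */
        LONG rewrite_hive(HKEY root, LPCTSTR subkey)
        {
            HKEY key;
            LONG rc = RegOpenKeyEx(root, subkey, 0, KEY_READ, &key);
            if (rc != ERROR_SUCCESS) return rc;
            rc = RegSaveKey(key, TEXT("C:\\hive-new.dat"), NULL);
            if (rc == ERROR_SUCCESS)
                rc = RegReplaceKey(key, NULL, TEXT("C:\\hive-new.dat"),
                                   TEXT("C:\\hive-old.dat"));
            RegCloseKey(key);
            return rc;
        }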

    Read the article

  • How to switch data pins on/off on parallel port?

    - by Matt
    I want to simply switch certain data pins on and off, so that they can control a set of relays. I'm not asking about the hardware bit (should be easy), but I don't know where to begin writing the software. I don't want a high level library that can send bytes to a device - I literally want to switch on/off certain pins. I'm running Linux and I want to do this in Java, so would I just need a library? It would be nice if the library has good documentation and is easy to use, but if not then a short example code will help me get started.
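
    The question asks for Java, where a library would indeed be needed (JNI around native code, or something like the old javax.comm/RXTX ports). Underneath, on Linux, pin-level control of a legacy parallel port is just writing a byte to the port's base address: each data pin D0-D7 is one bit of that byte. A C sketch of the raw mechanism, assuming the traditional LPT1 base address 0x378 (yours may differ) and root privileges:

        #include <stdio.h>
        #include <sys/io.h>

        int main(void)
        {
            if (ioperm(0x378, 1, 1) < 0) {  /* request access to the port */
                perror("ioperm");
                return 1;
            }
            outb(0x05, 0x378);  /* raise data pins D0 and D2, lower the rest */
            return 0;
        }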

    Read the article

  • How do I generate random data with RSA?

    - by acidzombie24
    After loading my RSACryptoServiceProvider rsa object, I would like to create a key for my AES object. Since I don't need to store the AES key (I only need it to decrypt on my private side), I figure I don't need to store it and can generate it with my public key. I thought doing rsa.Encrypt(byte[] with 4 hardcoded bytes) would generate the data I need. It turns out that every time I call this function, even with the same data, I get different results, so there's no way for me to recreate the AES key if it's different every time. How can I generate data with RSA in a way that I can recreate any time I need?

    Read the article

  • Sequential WSASend() calls - can I rely on TCP to put them on the wire in the posting order?

    - by Poni
    On Windows I/O completion ports, say I do this:

        void function()
        {
            WSASend("1111"); // A
            WSASend("2222"); // B
            WSASend("3333"); // C
        }

    If I got a "write-complete" that says 3 bytes of WSASend() A were sent, is it possible that right after that I'll get a "write-complete" telling me that some or all of B & C were sent, or will TCP hold them until I re-issue a WSASend() call with the rest of A's data? Or will TCP complete it automatically?

    Read the article
