Search Results

Search found 4304 results on 173 pages for 'bytes'.

  • What is the best way to maintain an entity's original properties when they are not included in MVC binding from the edit page?

    - by kingdango
    I have an ASP.NET MVC view for editing a model object. The edit page includes most of the properties of my object but not all of them -- specifically, it does not include the CreatedOn and CreatedBy fields, since those are set upon creation (in my service layer) and shouldn't change in the future. Unless I include these properties as hidden fields, they will not be picked up during binding and are unavailable when I save the modified object in my EF 4 DB context. In actuality, upon save the original values would be overwritten by nulls (or some type-specific default). I don't want to drop these in as hidden fields because it is a waste of bytes and I don't want those values exposed to potential manipulation. Is there a "first class" way to handle this situation? Is it possible to specify that an EF model property is to be ignored unless explicitly set?
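
    A minimal sketch of one common approach, with hypothetical entity and controller names: on save, re-fetch the original entity and copy across only the fields the edit view exposes, so CreatedOn/CreatedBy keep their stored values.

        [HttpPost]
        public ActionResult Edit(Widget posted)
        {
            // load the original row; "Widget" and "_context" are illustrative
            Widget original = _context.Widgets.Single(w => w.Id == posted.Id);
            // copy only the editable fields from the posted model
            original.Name = posted.Name;
            original.Description = posted.Description;
            // CreatedOn/CreatedBy are never assigned, so the stored values survive
            _context.SaveChanges();
            return RedirectToAction("Index");
        }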

  • Why can't I call .update on a MessageDigest instance?

    - by Arthur Ulfeldt
    When I run this from the REPL:

        (def md (MessageDigest/getInstance "SHA-1"))
        (. md update (into-array [(byte 1) (byte 2) (byte 3)]))

    I get:

        No matching method found: update for class java.security.MessageDigest$Delegate

    The Java 6 docs for MessageDigest show: update(byte[] input) "Updates the digest using the specified array of bytes." And the class of (class (into-array [(byte 1) (byte 2) (byte 3)])) is [Ljava.lang.Byte;. Am I missing something in the definition of update? Am I not creating the class I think I am? Am I not passing it the type I think I am?
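
    For what it's worth, a sketch of the usual explanation: into-array over boxed Byte values yields a Byte[] ([Ljava.lang.Byte;), while update wants a primitive byte[]; byte-array (or into-array with Byte/TYPE) produces the primitive array.

        (def md (java.security.MessageDigest/getInstance "SHA-1"))
        ;; byte-array builds a primitive byte[], matching update(byte[] input)
        (.update md (byte-array [(byte 1) (byte 2) (byte 3)]))
        ;; equivalently:
        (.update md (into-array Byte/TYPE [(byte 1) (byte 2) (byte 3)]))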

  • A 4-byte Unsigned Int for SQL Server 2008?

    - by Jeff Meatball Yang
    I understand there are multiple questions about this on SO, but I have yet to find a definitive answer of "yes, here's how..." So here it is again: what are the possible ways to store an unsigned integer value (a 32-bit value or 32-bit bitmap) in a 4-byte field in SQL Server? Here are the ideas I have seen:

    1. Use a -1 * 2^31 offset for all values. Disadvantage: need to perform math on the values before reading/writing/aggregating.
    2. Use 4 tinyint fields. Disadvantage: need to concatenate values to perform any operations.
    3. Use binary(4). Disadvantage: actually uses 4 + 2 bytes of space.
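
    A sketch of idea 1 in T-SQL, showing the offset math in both directions (values illustrative):

        DECLARE @unsigned BIGINT = 4294967295;  -- max 32-bit unsigned value
        -- store: shift down into signed INT range
        DECLARE @stored INT = CAST(@unsigned - 2147483648 AS INT);
        -- read back: undo the offset
        SELECT CAST(@stored AS BIGINT) + 2147483648 AS UnsignedValue;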

  • Column.DbType affecting runtime behavior

    - by leppie
    Hi. According to the MSDN docs, the DbType property/attribute of a Column type/element is only used for database creation. Yet today, when trying to submit data to an IMAGE column on a SQL CE database (not sure if this is CE-only), I got an exception of 'Data truncated to 8000 bytes'. This was due to the DbType still being defined as VARBINARY(MAX), which SQL CE does not support. Changing the DbType to IMAGE fixes the issue. So what other surprises do Linq2SQL attributes hold in store? Is this a bug or intended? Should I report it to MS?

    UPDATE: After getting the answer from Guffa, I tested it, but it seems that for NVARCHAR(10), adding an 11-char string causes a SQL exception, not a Linq2SQL one:

        The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        A first chance exception of type 'System.Data.SqlServerCe.SqlCeException' occurred in System.Data.SqlServerCe.dll
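
    For illustration, the kind of mapping change described above, assuming a hand-mapped Linq2SQL entity (names hypothetical):

        [Table(Name = "Pictures")]
        public class Picture
        {
            // On SQL CE the DbType string also drives runtime behavior,
            // not just database creation as the MSDN docs suggest.
            [Column(DbType = "IMAGE")]   // was: DbType = "VARBINARY(MAX)"
            public byte[] ImageContents;
        }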

  • How to receive packets on the MCU's serial port?

    - by itisravi
    Hello. Consider this code running on my microcontroller unit (MCU):

        while(1){
            do_stuff;
            if(packet_from_PC)
                send_data_via_gpio(new_packet);     //send via general purpose i/o pins
            else
                send_data_via_gpio(default_packet);
            do_other_stuff;
        }

    The MCU is also interfaced to a PC via a UART. Whenever the PC sends data to the MCU, the new_packet is sent; otherwise the default_packet is sent. Each packet can be 5 or more bytes with a pre-defined packet structure. My question is:

    1. Should I receive the entire packet from the PC inside the UART interrupt service routine (ISR)? In this case, I have to implement a state machine inside the ISR to assemble the packet (which can be lengthy with if-else or switch-case blocks).
    2. Or should I detect a REQUEST command (one byte) from the PC in my ISR, set a flag, disable the UART interrupt alone, and form the packet in my while(1) loop by polling the UART? A sketch of this option follows below.
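
    A sketch of option 2, with the register and command byte as placeholders: the ISR only latches the request, and the packet assembly stays in the while(1) loop.

        volatile unsigned char request_pending = 0;

        void uart_rx_isr(void)              /* keep the ISR minimal */
        {
            unsigned char b = UART_RX_REG;  /* hypothetical RX data register */
            if (b == REQUEST_CMD)           /* hypothetical one-byte command */
                request_pending = 1;        /* main loop polls this flag */
        }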

  • The shortest way to convert infix expressions to postfix (RPN) in C

    - by kuszi
    The original formulation is given here (you can also try your program for correctness). Additional rules:

    1. The program should read from standard input and write to standard output.
    2. The program should return zero to the calling system/program.
    3. The program should compile and run with gcc -O2 -lm -s -fomit-frame-pointer.

    The challenge has some history: the call for short implementations was announced at the Polish programming contest blog in September 2009. After the contest, the shortest code was 81 chars long. Later a second call was made for even shorter code, and after a year matix2267 published his solution in 78 bytes:

        main(c){read(0,&c,1)?c-41&&main(c-40&&(c%96<27||main(c),putchar(c))):exit(0);}

    Anyone to make it even shorter, or prove this is impossible?

  • Linux, C++ audio capturing (just microphone) library

    - by TheOm3ga
    I'm developing a musical game. It's like SingStar, but instead of singing you have to play the recorder. It's called oFlute, and it's still in an early development stage. In the game, I capture the microphone input, then run a simple FFT analysis and compare the results to the recorder's typical frequencies, thus getting the played note. At the beginning, the audio library I was using was RtAudio, but I don't remember why I switched to PortAudio, which is what I'm currently using. The problem is that, from time to time, it either crashes randomly or stops capturing, as if there were no sound coming from the microphone. My question is: what's the best option to capture microphone input on Linux? I just need to open, read, and close a flow of bytes from the microphone. I've been reading this guide, and (un)surprisingly it says: "I don't think that PortAudio is very good API for Unix-like operating systems." So, what do you recommend?
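
    One lower-level route often suggested on Linux is talking to ALSA directly; a minimal capture sketch, assuming the "default" device and with error handling elided:

        #include <alsa/asoundlib.h>

        int main(void)
        {
            snd_pcm_t *pcm;
            short buf[4096];
            snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
            snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 44100, 1, 500000);  /* mono, 44.1 kHz, 0.5 s latency */
            snd_pcm_readi(pcm, buf, 4096);            /* blocking read of 4096 frames */
            snd_pcm_close(pcm);
            return 0;
        }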

  • Forward SNMP requests from AgentX master to AgentX subagent

    - by Nadia
    I am running an AgentX master and an AgentX subagent on Linux. When I run snmpget on a default MIB object, i.e. sysDescr.0, it returns fine, but when I request an OID that was registered through the AgentX subagent it times out. It appears that the master receives the GET request but does not forward it on to the AgentX subagent. The MIB is registered successfully, but when the master agentx receives the GET request it says "Sending 60 bytes to UDP: unknown": it can't find the location to forward to. Am I missing a configuration of some sort on the subagent side? How does the master know which subagent is supposed to receive the requests?
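
    If this is Net-SNMP, the master side is normally configured in snmpd.conf along these lines (a sketch; the socket path is the common default, not something from this question):

        master agentx
        agentXSocket /var/agentx/master

    The subagent has to connect to that same AgentX socket for the master to know where to forward requests for its registered OIDs.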

  • Write an image file larger than 4096 bytes

    - by ntan
    Hi,

    EDIT: I am using ODBC and found that I cannot read more than 4096 bytes for a field. Any suggestions?

    I am reading an image from a DB:

        $image = $row["image-contents"];

    Now I try to write the file to disk:

        $image_name = "test.jpg";
        $file = fopen("images/".$image_name, "w");
        fwrite($file, $image);
        fclose($file);

    The problem is that the created file is only 4096 bytes and the image file is corrupt, because $image is larger than 4096 bytes. I know that fwrite writes in blocks, but I don't know how to do it. Help please!
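
    A sketch of a common workaround: the 4096-byte cap matches PHP's default ODBC long-read length, which can be raised on the statement before fetching (resource name and limit illustrative):

        // raise the ODBC "long read length" before odbc_fetch_* is called
        odbc_longreadlen($result, 1000000);          // or ini_set('odbc.defaultlrl', '1M')
        odbc_binmode($result, ODBC_BINMODE_RETURN);  // return binary data unconverted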

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0 based desktop client that will send large files (well, largish: under 2GB) to a server. I need to develop the server as well; it can be on any technology. It should be secure, so an underlying SSL stream is needed. What are my options? Any obvious caveats etc. I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will the perf suck on the server with all these disk writes? Are there any other clever options? I am limited to .NET 2.0 on the client; if I did move to a WCF client, will it buy me something magical and cool for this scenario? Thanks
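
    A sketch of the chunked-upload idea on .NET 2.0, with host, port, and framing left as placeholders (uses System.Net.Sockets, System.Net.Security, and System.IO):

        TcpClient client = new TcpClient("server.example.com", 8443);
        SslStream ssl = new SslStream(client.GetStream());
        ssl.AuthenticateAsClient("server.example.com");
        byte[] buffer = new byte[64 * 1024];
        using (FileStream file = File.OpenRead(path))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                ssl.Write(buffer, 0, read);   // server appends each chunk
        }
        ssl.Close();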

  • Converting C pointer types

    - by bobbyb
    I have a C pointer to a structure type called uchar4, which looks like:

        {
            uchar x;
            uchar y;
            uchar z;
            uchar w;
        }

    I also have data passed in as uint8*. I'd like to create a uchar4* pointing to the data at the uint8*, so I've tried doing this:

        uint8 *data_in;
        uchar4 *temp = (uchar4*)data_in;

    However, the first 8 bytes always seem to be wrong. Is there another way of doing this?

  • Java memory usage

    - by xdevel2000
    I know I always post a similar question about array memory usage, but now I want to post a more specific one. After I read this article: http://www.javamex.com/tutorials/memory/object_memory_usage.shtml I didn't understand some things:

    1. Is the size of a data type always the same, even on different platforms (Linux / Windows, 32 / 64 bit)? So will an int always be 32 bits?
    2. When I compute the memory usage, must I also count the reference value itself? If I have a reference to an object of a class that has an int field, will its memory be 12 (object header) + 4 (reference) + 4 (the int field) + 4 (padding) = 24 bytes?
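
    Rather than guessing at headers and padding, one measurable route is the instrumentation API; a sketch of the standard agent pattern (needs a Premain-Class manifest entry and a -javaagent flag, and the result is JVM-specific):

        import java.lang.instrument.Instrumentation;

        public class SizeAgent {
            private static volatile Instrumentation inst;

            public static void premain(String args, Instrumentation i) { inst = i; }

            // answers "how big is this object on *this* VM", padding included
            public static long sizeOf(Object o) { return inst.getObjectSize(o); }
        }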

  • Playback audio data with GWT

    - by Henrik
    I am creating a GWT client application which interacts with a server, and I am getting all my response data from the server in JSON format. Among other things, there is wave data in the server's database which I would like to retrieve and then play back on the client. I am able to get the wave data as an array of bytes in JSON format. My problem is: how do I play back the wave array data in a browser? Is it even possible, or do I have to find another solution? I've searched the web and found some GWT packages which are able to play back sound, but they all play back directly from a URL.

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME dll to make it into an MP3. The problem is, I don't know how to contact that microphone. Do I use a library that can detect a device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose that I'll need to read all the data into a byte array and then serialize that into a WAV file, but I'm not sure. Can I get some pointers on this subject?
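
    One commonly cited option is a library rather than raw port reads; a sketch using the third-party NAudio library (API details assumed):

        using NAudio.Wave;

        WaveInEvent waveIn = new WaveInEvent();
        waveIn.WaveFormat = new WaveFormat(44100, 1);        // 44.1 kHz mono
        WaveFileWriter writer = new WaveFileWriter("out.wav", waveIn.WaveFormat);
        waveIn.DataAvailable += delegate(object s, WaveInEventArgs e)
        {
            writer.Write(e.Buffer, 0, e.BytesRecorded);      // stream bytes to the .wav
        };
        waveIn.StartRecording();
        System.Threading.Thread.Sleep(5000);                 // record for ~5 seconds
        waveIn.StopRecording();
        writer.Dispose();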

  • Code sample with buffer overflow (gets method). Why does it not behave as expected?

    - by citronas
    This is an extract from a C program that should demonstrate a buffer overflow:

        void foo() {
            char arr[8];
            printf(" enter bla bla bla");
            gets(arr);
            printf(" you entered %s\n", arr);
        }

    The question was: "How many input chars can a user enter at most without creating a buffer overflow?" My initial answer was 8, because the char array is 8 bytes long. Although I was pretty certain my answer was correct, I tried a higher number of chars and found that the limit of chars I can enter before I get a segmentation fault is 11. (I'm running this in a VirtualBox Ubuntu.) So my question is: why is it possible to enter 11 chars into that 8-byte array?

  • Length of data returned from CGImageGetDataProvider is larger than expected

    - by jcoplan
    I'm loading a grayscale PNG image and I want to access the underlying pixel data. However, after I get the pixel data via CGImageGetDataProvider, the length of the data returned is longer than expected:

        CGDataProviderRef provider = CGDataProviderCreateWithFilename(cStr);
        CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
        mapWidth = CGImageGetWidth(image);
        mapHeight = CGImageGetHeight(image);
        lookupMap = CGDataProviderCopyData(CGImageGetDataProvider(image));

    mapWidth comes out to 1804 and mapHeight comes out to 1005, the product of which is 1813020. When I call CFDataGetLength(lookupMap) the response is 1833120. Where are these extra 20100 bytes coming from? Any help here is much appreciated. Am I missing something about the underlying format of the image?
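
    The numbers are consistent with per-row padding: 1833120 / 1005 rows = 1824 bytes per row, i.e. 20 alignment bytes after each 1804-pixel row. A quick check (x and y here are hypothetical pixel coordinates):

        size_t bytesPerRow = CGImageGetBytesPerRow(image);   // presumably 1824, not 1804
        const UInt8 *pixels = CFDataGetBytePtr(lookupMap);
        UInt8 value = pixels[y * bytesPerRow + x];           // index with the stride, not the width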

  • How to write a floating-point value accurately to a bin file

    - by user319873
    Hi. I am trying to dump floating-point values from my program to a bin file. Since I can't use any stdlib function, I am thinking of writing the value char by char into a big char array, which my test application then dumps to a file. For example, with float a = 3132.000001; I will be dumping this to a char array in 4 bytes. A code example would be:

        if(((a < 1.0) && (a > 0.0)) || ((a > -1.0) && (a < 0.0)))
            a = a * 1000000;   // 6-digit fraction part

    Can you please help me write this in a better way?
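
    For reference, a sketch of the usual way to dump a float exactly: copy its raw bytes instead of scaling to an integer (this assumes writer and reader share the same float format and endianness):

        #include <string.h>

        /* append the float's 4 bytes, bit for bit, to the output buffer */
        void put_float(unsigned char *dst, float f)
        {
            memcpy(dst, &f, sizeof f);
        }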

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10, an AMD Phenom 2 CPU, and g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc is executed successfully:

        int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY);

    which returns a file descriptor int (in my case 6); name is a char array containing a valid path to the file, which I successfully created. These also run successfully:

        fchmod(outfile_fd, S_IRUSR | S_IWUSR);
        access(name, W_OK);

    The issue occurs when running the function (from sys/uio.h):

        write(outfile_fd, this->control_buffer, read_length);

    which returns -1. -1 is returned if nothing was written; otherwise a non-negative integer equal to the number of bytes written is returned. Anyone having a clue how I can get the write function to work?
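
    One quick diagnostic, sketched here: write() sets errno on failure, so perror can name the reason.

        #include <errno.h>
        #include <stdio.h>
        #include <unistd.h>

        ssize_t checked_write(int fd, const void *buf, size_t len)
        {
            ssize_t n = write(fd, buf, len);
            if (n == -1)
                perror("write");   /* e.g. "write: Bad file descriptor" */
            return n;
        }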

  • Seeking not working in HTML5 audio tag

    - by lord_wilmore
    I have a lighttpd server running locally. If I load a static file on the server (through an HTML5 audio tag), it plays and seeks fine. However, seeking doesn't work when running a dev server (web.py/CherryPy) or if I return the bytes via a defined action URL instead of as a static file. It won't load the duration either. According to the "HTTP byte range requests" section on this Opera page, it's something to do with support for byte range requests/partial content responses; the content is treated as streaming instead. What I don't understand is:

    1. If the browser has the whole file downloaded, surely it can display the duration, and surely it can seek.
    2. What I need to do on the web server to enable byte range requests (for non-static URLs).

    Any advice would be most gratefully received.
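
    For reference, the exchange the browser needs for seeking looks roughly like this (path and sizes illustrative):

        GET /audio/clip.ogg HTTP/1.1
        Range: bytes=1000000-

        HTTP/1.1 206 Partial Content
        Accept-Ranges: bytes
        Content-Range: bytes 1000000-3999999/4000000
        Content-Length: 3000000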

  • Printing a double in binary

    - by Happy Mittal
    In Thinking in C++ by Bruce Eckel, there is a program given to print a double value in binary (Chapter 3, page 189):

        int main(int argc, char* argv[]) {
            if(argc != 2) {
                cout << "Must provide a number" << endl;
                exit(1);
            }
            double d = atof(argv[1]);
            unsigned char* cp = reinterpret_cast<unsigned char*>(&d);
            for(int i = sizeof(double); i > 0; i -= 2) {
                printBinary(cp[i-1]);
                printBinary(cp[i]);
            }
        }

    Here, when printing cp[i] with i = 8 (assuming double is 8 bytes), wouldn't it be undefined behaviour? I mean, this code doesn't work correctly, as it never prints cp[0].
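
    On the face of it, yes: on the first iteration i is 8, so cp[i] reads one past the 8-byte double, and cp[0] is never printed. The loop as presumably intended:

        for(int i = sizeof(double); i > 0; i -= 2) {
            printBinary(cp[i-1]);
            printBinary(cp[i-2]);   // steps down in pairs without touching cp[8]
        }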

  • Python input UnicodeDecodeError

    - by The man on the Clapham omnibus
    Python 3.x:

        >>> a = input()
        hope
        >>> a
        'hope'
        >>> b = input()
        håpe
        >>> b
        'håpe'
        >>> c = input()
        start typing hå... delete using backspace... and change to hope
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 1: invalid continuation byte
        >>>

    The situation is not terrible, I am working around it, but I find it strange that when deleting, the bytes get messed up. Has anyone else experienced this? The terminal history shows that I thought I entered h?ope. Any ideas? In the script that is using this, I do import readline to give command line history.
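
    What the traceback suggests, as a sketch: 'å' is two bytes in UTF-8 (0xc3 0xa5), and the backspace appears to have removed only the trailing byte, leaving a bare lead byte in the input buffer:

        # reproduces the same error: a lone 0xc3 lead byte followed by 'o'
        b"h\xc3ope".decode("utf-8")
        # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 1:
        # invalid continuation byte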

  • When will a TCP network packet be fragmented at the application layer?

    - by zooropa
    When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet limit (at the network layer) of 1500 bytes. But that fragmentation will be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
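
    A related sketch: since TCP delivers a byte stream, a single send() may arrive across several recv() calls, so receivers typically loop until a whole message is in hand.

        #include <sys/socket.h>

        /* read exactly len bytes, tolerating short reads */
        ssize_t recv_all(int sock, unsigned char *buf, size_t len)
        {
            size_t got = 0;
            while (got < len) {
                ssize_t n = recv(sock, buf + got, len - got, 0);
                if (n <= 0)
                    return n;   /* error, or peer closed the connection */
                got += n;
            }
            return (ssize_t)got;
        }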

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering: when I reference some of the memory, is it faster to read 4 bytes at once as an integer or 1 byte as a character? I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
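
    For concreteness, a sketch of the file-mapping route being weighed here (POSIX, error handling elided, file name illustrative):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        void parse_file(const char *path)
        {
            int fd = open(path, O_RDONLY);
            struct stat st;
            fstat(fd, &st);
            unsigned char *p = (unsigned char *)mmap(NULL, st.st_size,
                                                     PROT_READ, MAP_PRIVATE, fd, 0);
            /* ... bit-shift through transport-stream packets in p[0 .. st.st_size) ... */
            munmap(p, st.st_size);
            close(fd);
        }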

  • MS SQL 2005 - Understanding output of DBCC SHOWCONTIG

    - by user169743
    I'm seeing some slow performance on an MS SQL 2005 database. I've been doing some research regarding MS SQL performance, but I'm having difficulty fully understanding the output of SHOWCONTIG and would be very grateful if someone could have a look and offer some suggestions to improve performance.

        TABLE level scan performed.
        Pages Scanned................................: 19348
        Extents Scanned..............................: 2427
        Extent Switches..............................: 3829
        Avg. Pages per Extent........................: 8.0
        Scan Density [Best Count:Actual Count].......: 63.16% [2419:3830]
        Logical Scan Fragmentation ..................: 8.40%
        Extent Scan Fragmentation ...................: 35.15%
        Avg. Bytes Free per Page.....................: 938.1
        Avg. Page Density (full).....................: 88.41%
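
    With scan density that low and extent fragmentation that high, the usual first step is rebuilding or reorganizing the table's indexes; a hedged example (table name illustrative):

        -- heavier: rebuilds the indexes from scratch
        ALTER INDEX ALL ON dbo.MyTable REBUILD;
        -- lighter alternative: in-place defragmentation
        ALTER INDEX ALL ON dbo.MyTable REORGANIZE;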

  • C++: normalizing data sizes across systems

    - by Bocochoco
    I have a struct with three variables: two unsigned ints and an unsigned char. From my understanding, a C++ char is always 1 byte, regardless of what operating system it is on. The same can't be said for other datatypes. I am looking for a way to normalize PODs so that, when saved into a binary file, the resulting file is readable on any operating system the code is compiled for. I changed my struct to use 1-byte alignment by adding #pragma as follows:

        #pragma pack(push, 1)
        struct test {
            int a;
        };
        #pragma pack(pop)

    but that doesn't necessarily mean that int a is exactly 4 bytes on every OS, I don't think? Is there a way to ensure that a file saved from my code will always be readable?
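
    A sketch of the usual approach: the fixed-width types from <stdint.h> (C99, or <cstdint> in newer C++) pin each field's size on every conforming platform; byte order remains a separate concern.

        #include <stdint.h>

        #pragma pack(push, 1)
        struct test {
            uint32_t a;   /* exactly 4 bytes everywhere */
            uint32_t b;
            uint8_t  c;
        };
        #pragma pack(pop)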
