Search Results

Search found 3443 results on 138 pages for 'byte'.


  • C# : Attachment from byte array?

    - by JL
    I have a byte[] which holds a file's contents, and I would like to send it as an attachment using System.Net.Mail. I noticed the Attachment class has an overload which accepts a stream: Attachment att = new Attachment(Stream contentStream, string name); Is it possible to pass the byte[] through this overload?
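
    A minimal sketch of one approach: wrap the byte[] in a MemoryStream and pass that to the overload above. The data source (GetFileBytes) and the addresses are placeholders; note the stream must stay open until the message is actually sent, since Attachment reads from it at send time.

        using System.IO;
        using System.Net.Mail;

        byte[] fileBytes = GetFileBytes();            // hypothetical source of the file data
        using (var ms = new MemoryStream(fileBytes))  // wraps the array without copying
        {
            var message = new MailMessage("from@example.com", "to@example.com");
            // Attachment(Stream, string) pulls from the stream when the mail is sent,
            // so dispose the stream only after Send completes.
            message.Attachments.Add(new Attachment(ms, "report.pdf"));
            new SmtpClient("localhost").Send(message);
        }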

    Read the article

  • Write string to fixed-length byte array in C#

    - by toasteroven
    Somehow I couldn't find this with a Google search, but I feel like it has to be simple... I need to convert a string to a fixed-length byte array, e.g. write "asdf" to a byte[20] array. The data is being sent over the network to a C++ app that expects a fixed-length field, and it works fine if I use a BinaryWriter, write the characters one by one, and pad by writing '\0' an appropriate number of times. Is there a more appropriate way to do this?
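
    One possible shortcut, sketched under the assumption that the C++ side expects a single-byte encoding such as ASCII: Encoding.GetBytes has an overload that writes directly into an existing array, and a freshly allocated byte[] is already zero-filled, so the '\0' padding comes for free.

        using System.Text;

        string s = "asdf";
        byte[] field = new byte[20];                  // new arrays are zero-initialized
        // Writes the encoded chars at offset 0; the remaining bytes stay '\0'.
        // Throws if the encoded string does not fit in the 20-byte field.
        Encoding.ASCII.GetBytes(s, 0, s.Length, field, 0);
        writer.Write(field);                          // writer: the existing BinaryWriter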

    Read the article

  • Take a string to a byte[]

    - by Vaccano
    I have a string in my database that represents an image. It looks like this: 0x89504E470D0A1A0A0000000D49484452000000F00000014008020000000D8A66040.... <truncated for brevity> When I load it in from the database it comes in as a byte[]. How can I convert the string value to a byte array myself? (I am trying to remove the DB for some testing code.)
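
    A sketch of a straightforward conversion, assuming the stored string is the hex form shown above: strip the 0x prefix and parse two hex digits per byte.

        string hex = "0x89504E47";                    // truncated sample from the question
        if (hex.StartsWith("0x")) hex = hex.Substring(2);

        byte[] bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);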

    Read the article

  • BYTE typedef in VC++ and windows.h

    - by jules
    Hi, I am using Visual C++, and I am trying to include a file that uses BYTE (as well as DOUBLE, LPCONTEXT...), which by default is not a defined type. If I include windows.h, it works fine, but windows.h also defines GetClassName, which I don't need. I am looking for an alternative to the windows.h include that would work with VC++ and would define most of the types like BYTE, DOUBLE... Thanks

    Read the article

  • Convert 2 bytes to a number

    - by Vaccano
    I have a control that has a byte array in it. Every now and then there are two bytes that tell me some info about the number of future items in the array. So as an example I could have: ... ... Item [4] = 7 Item [5] = 0 ... ... The value of this is clearly 7. But what about this? ... ... Item [4] = 0 Item [5] = 7 ... ... Any idea on what that equates to (as a normal int)? I went to binary and thought it may be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. whether it uses all 8 bits of each byte). Is there any way to know this without testing? Note: I am using C# 3.0 and Visual Studio 2008
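
    Two bytes combine into an int in one of two ways, and the worked example in the question (7, 0 → 7) implies the low-order byte comes first (little-endian), which would indeed make 0, 7 come out to 7 × 256 = 1792. A sketch of both readings (GetControlBytes is a hypothetical accessor; only the device's documentation or testing can confirm which order it uses):

        byte[] item = GetControlBytes();              // hypothetical accessor

        // little-endian: first byte is the low-order byte
        // { 7, 0 } -> 7,   { 0, 7 } -> 1792
        int littleEndian = item[4] | (item[5] << 8);

        // big-endian: first byte is the high-order byte
        // { 7, 0 } -> 1792, { 0, 7 } -> 7
        int bigEndian = (item[4] << 8) | item[5];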

    Read the article

  • Difference between C# and java big endian bytes using miscutil

    - by Eric Hauser
    I'm using the miscutil library to communicate between a Java and a C# application using a socket. I am trying to figure out the difference between the following code (this is Groovy, but the Java result is the same): import java.io.* def baos = new ByteArrayOutputStream(); def stream = new DataOutputStream(baos); stream.writeInt(5000) baos.toByteArray().each { println it } /* outputs - 0, 0, 19, -120 */ and C#: using (var ms = new MemoryStream()) using (EndianBinaryWriter writer = new EndianBinaryWriter(EndianBitConverter.Big, ms, Encoding.UTF8)) { writer.Write(5000); ms.Position = 0; foreach (byte bb in ms.ToArray()) { Console.WriteLine(bb); } } /* outputs - 0, 0, 19, 136 */ As you can see, the last byte is -120 in the Java version and 136 in C#.
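
    The two outputs are in fact the same bits: Java's byte is signed (-128..127) while C#'s byte is unsigned (0..255), so the byte 0x88 prints as -120 in Java and 136 in C#. A quick C# check of that equivalence:

        byte b = 136;                                 // 0x88, as C# prints it
        sbyte s = unchecked((sbyte)b);                // reinterpret the same bits as signed
        Console.WriteLine(s);                         // -120, as Java prints it
        Console.WriteLine(136 - 256);                 // -120: the two's-complement relationship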

    Read the article

  • ghc6 install trouble: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    - by olimay
    Having trouble installing ghc6 on Ubuntu Maverick via apt. Here's what seems to be the relevant error that comes up when I try to (apt-get|aptitude) install ghc6: A package failed to install. Trying to recover: Setting up ghc6 (6.12.1-13ubuntu1) ... ghc-pkg: /home/opm/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf: hGetContents: invalid argument (invalid UTF-8 byte sequence) (Here's the full output, if you're interested: http://paste.ubuntu.com/566475/ ) This still happens after apt-get clean and apt-get update. My searching around has not really helped me understand what's going on, except that it might be caused by a mismatch in locale. So, here's the output of locale too: LANG=en_US.utf8 LANGUAGE=en_US:en LC_CTYPE="en_US.utf8" LC_NUMERIC="en_US.utf8" LC_TIME="en_US.utf8" LC_COLLATE="en_US.utf8" LC_MONETARY="en_US.utf8" LC_MESSAGES="en_US.utf8" LC_PAPER="en_US.utf8" LC_NAME="en_US.utf8" LC_ADDRESS="en_US.utf8" LC_TELEPHONE="en_US.utf8" LC_MEASUREMENT="en_US.utf8" LC_IDENTIFICATION="en_US.utf8" LC_ALL= Any ideas? Additional background: this all seems very strange to me, because I used to have ghc6 installed correctly--I use XMonad as my main window manager most of the time. I tried to install haskell-platform (through apt), which failed and told me that there was something wrong with ghc6, and so I reinstalled ghc6 and began to get the above error message.

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on I am implementing an RTPpacket where I have to fill the header array of byte with RTP header fields. //size of the RTP header: static int HEADER_SIZE = 12; // bytes //Fields that compose the RTP header public int Version; // 2 bits public int Padding; // 1 bit public int Extension; // 1 bit public int CC; // 4 bits public int Marker; // 1 bit public int PayloadType; // 7 bits public int SequenceNumber; // 16 bits public int TimeStamp; // 32 bits public int Ssrc; // 32 bits //Bitstream of the RTP header public byte[] header = new byte[ HEADER_SIZE ]; This was my approach: /* * bits 0-1: Version * bit 2: Padding * bit 3: Extension * bits 4-7: CC */ header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 4)|CC ).byteValue(); /* * bit 0: Marker * bits 1-7: PayloadType */ header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue(); /* SequenceNumber takes 2 bytes = 16 bits */ header[2] = new Integer( SequenceNumber >> 8 ).byteValue(); header[3] = new Integer( SequenceNumber ).byteValue(); /* TimeStamp takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue(); /* Ssrc takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue(); Any other, maybe 'better' ways to do this?

    Read the article

  • What is the Proper approach for Constructing a PhysicalAddress object from Byte Array

    - by Paul Farry
    I'm trying to understand the correct approach for a constructor that accepts a byte array, with regard to how it stores its data (specifically with PhysicalAddress). I have an array of 6 bytes (theAddress) that is constructed once. I have a source array of 18 bytes (theAddresses) that is loaded from a TCP connection. I then copy the 6 bytes from theAddresses+offset into theAddress and construct the PhysicalAddress from it. The problem is that PhysicalAddress just stores the reference to the array that was passed in. Therefore, if you subsequently check the addresses, they only ever point to the last address that was copied in. When I took a look inside PhysicalAddress with Reflector it's easy to see what's going on. public PhysicalAddress(byte[] address) { this.changed = true; this.address = address; } Now I know this can be solved by creating theAddress array on each pass, but I wanted to find out what really is the best practice for this. Should the constructor of an object that accepts a byte array create its own private variable for holding the data and copy it from the original? Should it just hold the reference to what was passed in? Should I just create theAddress on each pass in the loop?
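
    Since PhysicalAddress keeps the reference it is given, one sketch of the defensive-copy approach is to allocate a fresh 6-byte slice per address before constructing each object (ReceiveAddresses is a hypothetical stand-in for the TCP read; array names follow the question):

        using System;
        using System.Collections.Generic;
        using System.Net.NetworkInformation;

        byte[] theAddresses = ReceiveAddresses();     // 18 bytes from the TCP connection
        var result = new List<PhysicalAddress>();

        for (int offset = 0; offset + 6 <= theAddresses.Length; offset += 6)
        {
            byte[] slice = new byte[6];               // fresh array per address
            Buffer.BlockCopy(theAddresses, offset, slice, 0, 6);
            result.Add(new PhysicalAddress(slice));   // safe: nothing else mutates slice
        }

    As a general design rule, a constructor that stores a caller-supplied array should either copy it or document that it takes ownership; since PhysicalAddress does neither, the caller has to make the copy.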

    Read the article

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where after 6 mins +- 5 secs we get the above exception. Some more info about the exception stacktrace is below. This all happens pretty religiously: 6 mins goes by and bam, the following 3 exceptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings, either IIS 6 or Server 2003 settings, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers. I am hoping that maybe the information I have provided may help a little bit. 208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive. , at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1 208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1 208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

    Read the article

  • What is the fastest (possibly unsafe) way to read a byte[]?

    - by Aidiakapi
    I'm working on a server project in C#, and after a TCP message is received, it is parsed and stored in a byte[] of exact size. (Not a buffer of fixed length, but a byte[] of an absolute length in which all data is stored.) Now, for reading this byte[] I'll be creating some wrapper functions (also for compatibility). These are the signatures of all the functions I need: public byte ReadByte(); public sbyte ReadSByte(); public short ReadShort(); public ushort ReadUShort(); public int ReadInt(); public uint ReadUInt(); public float ReadFloat(); public double ReadDouble(); public string ReadChars(int length); public string ReadString(); The string is a \0 terminated string, and is probably encoded in ASCII or UTF-8, but I cannot tell that for sure, since I'm not writing the client. The data consists of: byte[] _data; int _offset; Now I can write all those functions manually, like this: public byte ReadByte() { return _data[_offset++]; } public sbyte ReadSByte() { byte r = _data[_offset++]; if (r >= 128) return (sbyte)(r - 256); else return (sbyte)r; } public short ReadShort() { byte b1 = _data[_offset++]; byte b2 = _data[_offset++]; if (b1 >= 128) return (short)(b1 * 256 + b2 - 65536); else return (short)(b1 * 256 + b2); } public ushort ReadUShort() { byte b1 = _data[_offset++]; return (ushort)(b1 * 256 + _data[_offset++]); } But I wonder if there's a faster way, not excluding the use of unsafe code, since this seems a little bit too much work for simple processing.
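
    One commonly suggested alternative, sketched here without unsafe code: let BitConverter read the bytes in one call and use IPAddress.NetworkToHostOrder to handle the big-endian layout the manual code implies (field names follow the question; this assumes a little-endian host, which is what BitConverter sees on x86):

        using System.Net;                             // for IPAddress.NetworkToHostOrder

        public short ReadShort()
        {
            // ToInt16 reads in host (little-endian) order; the swap makes it big-endian
            short v = IPAddress.NetworkToHostOrder(BitConverter.ToInt16(_data, _offset));
            _offset += 2;
            return v;
        }

        public int ReadInt()
        {
            int v = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(_data, _offset));
            _offset += 4;
            return v;
        }

        public ushort ReadUShort()
        {
            return (ushort)ReadShort();               // same bits, unsigned view
        }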

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. 
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ?
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }

    Read the article

  • Fast sign in C++ float...are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too. // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero inline float fast_sign(float f) { if (((int&)f & 0x7FFFFFFF)==0) return 0.f; // test exponent & mantissa bits: is input zero? else { float r = 1.0f; (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary return r; } } (Source: ``Fast sign for 32 bit floats'', Peter Schoffhauzer) I am wary of using this routine, though, because of the raw bit operations. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick

    Read the article

  • How do you convert an unsigned int[16] of hexadecimal to an unsigned char array without losing any information?

    - by user1068636
    I have an unsigned int[16] array that when printed out looks like this: 4418703544ED3F688AC208F53343AA59 The code used to print it out is this: for (i = 0; i < 16; i++) printf("%X", CipherBlock[i] / 16), printf("%X",CipherBlock[i] % 16); printf("\n"); I need to pass this unsigned int array "CipherBlock" into a decrypt() method that only takes unsigned char *. How do I correctly memcpy everything from the "CipherBlock" array into an unsigned char array without losing information? My understanding is an unsigned int is 4 bytes and an unsigned char 1 byte. Since "CipherBlock" is 16 unsigned integers, the total size in bytes = 16 * 4 = 64 bytes. Does this mean my unsigned char[] array needs to be 64 in length? If so, would the following work? unsigned char arr[64] = { '\0' }; memcpy(arr,CipherBlock,64); This does not seem to work. For some reason it only copies the first byte of "CipherBlock" into "arr". The rest of "arr" is '\0' thereafter.

    Read the article

  • IOException reading a large file from a UNC path into a byte array using .NET

    - by Matt
    I am using the following code to attempt to read a large file (280 MB) into a byte array from a UNC path: public void ReadWholeArray(string fileName, byte[] data) { int offset = 0; int remaining = data.Length; log.Debug("ReadWholeArray"); FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read); while (remaining > 0) { int read = stream.Read(data, offset, remaining); if (read <= 0) throw new EndOfStreamException (String.Format("End of stream reached with {0} bytes left to read", remaining)); remaining -= read; offset += read; } } This is blowing up with the following error: System.IO.IOException: Insufficient system resources exist to complete the requested If I run this using a local path it works fine; in my test case the UNC path is actually pointing to the local box. Any thoughts on what is going on here?
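
    A workaround often suggested for this error over UNC paths is to cap the size of each individual Read call so the network redirector never sees one enormous request. A sketch of that idea (the 64 KB chunk size is an assumption, not a magic number; the using block also fixes the undisposed stream in the original):

        public void ReadWholeArray(string fileName, byte[] data)
        {
            const int MaxChunk = 64 * 1024;           // cap per-call read size
            int offset = 0;
            using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
            {
                while (offset < data.Length)
                {
                    int toRead = Math.Min(MaxChunk, data.Length - offset);
                    int read = stream.Read(data, offset, toRead);
                    if (read <= 0)
                        throw new EndOfStreamException(String.Format(
                            "End of stream reached with {0} bytes left to read",
                            data.Length - offset));
                    offset += read;
                }
            }
        }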

    Read the article

  • converting between struct and byte array

    - by chonch
    This question is about converting between a struct and a byte array. Many solutions are based around GCHandle.Alloc() and Marshal.StructureToPtr(). The problem is these calls generate garbage. For example, under Windows CE 6 R3 about 400 bytes of garbage is made with a small structure. If the code below could be made to work, the solution could be considered cleaner. It appears the sizeof() happens too late in the compile to work. public struct Data { public double a; public int b; public double c; } [StructLayout(LayoutKind.Explicit)] public unsafe struct DataWrapper { private static readonly int val = sizeof(Data); [FieldOffset(0)] public fixed byte Arr[val]; // "fixed" is to embed array instead of ref [FieldOffset(0)] public Data Data; // based on a C++ union }
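
    Indeed, a fixed-size buffer length must be a compile-time constant, so the sizeof(Data) initializer above cannot work. A garbage-free alternative sketch keeps the struct as-is and blits its bytes through a pointer in an unsafe block (requires compiling with /unsafe; works here because Data contains only unmanaged types):

        public static unsafe byte[] ToBytes(Data d)
        {
            byte[] arr = new byte[sizeof(Data)];      // sizeof(Data) is legal in unsafe code
            fixed (byte* dest = arr)
            {
                *(Data*)dest = d;                     // one blit: no Marshal, no GCHandle
            }
            return arr;
        }

        public static unsafe Data FromBytes(byte[] arr)
        {
            fixed (byte* src = arr)
            {
                return *(Data*)src;                   // reinterpret the bytes as a Data
            }
        }

    The new byte[] still allocates, of course; to be fully allocation-free, have the caller pass in a reusable buffer instead.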

    Read the article

  • byte[] operations in Java

    - by kape123
    Let's say I have an array of bytes: byte[] arr = new byte[] { 0, 1, 2, 3, 4 }; Does the platform have functions that I can use to play with this array - for example, how do I invert it (get 4,3,2,1,0)? Or how do I invert part of it (2,1,0,3,4)? Get part of the array (0,1,2,3)? I know I can manually write functions, but I am curious if I'm missing useful util functions in the platform that I should know about (and couldn't find any useful guide using Google). Thanks!

    Read the article

  • reading a BYTE as a DWORD in Masm

    - by Help I'm in college
    Hi, once again I'm doing MASM programming. I'm trying to write a procedure using the Irvine32 library where the user enters a string which is put into an array of BYTEs with ReadString. Then it loops over that array and determines if each character is a number. However, when I try cmp [buffer + ecx], 30h MASM complains about comparing two things that are not the same size. Is there any way I could read the ASCII code in each BYTE in the array as a DWORD (or otherwise extract the ASCII value in each BYTE)?

    Read the article

  • Last byte missing when casting from varbinary to varchar

    - by xaxie
    I'm losing the last byte when converting varbinary to varchar in some cases. For example: DECLARE @binary varbinary(8000), @char varchar(8000) set @binary = 0x000082 set @char = CAST(@binary as varchar(8000)) select BinaryLength=DATALENGTH(@binary), CharLength=DATALENGTH(@char) The result is BinaryLength CharLength 3 2 The affected byte values are from 0x81 - 0xFE. The stranger thing is that if I use varchar(MAX) instead of varchar(8000) when casting, there is no problem at all. Could someone tell me the root cause of the issue? PS: I run the SQL in MS SQL Server 2008. Thanks!

    Read the article

  • Efficient way to calculate byte length of a character, depending on the encoding

    - by BalusC
    What's the most efficient way to calculate the byte length of a character, taking the character encoding into account? In UTF-8, for example, characters have a variable byte length, so each character needs to be determined individually. So far I've come up with this: char c = getItSomehow(); String encoding = "UTF-8"; int length = new String(new char[] { c }).getBytes(encoding).length; But this is clumsy and inefficient in a loop, since a new String needs to be created every time. I can't find other and more efficient ways in the Java API. I imagine that this can be done with bitwise operations like bit shifting, but that's my weak point and I'm unsure how to take the encoding into account here :) If you question the need for this, check this topic.

    Read the article

  • how to read the txt file from database (byte[] to FileStream)

    - by Ranjana
    I have stored a txt file in my SQL Server database, and I need to read the txt file line by line to get at its content. My code: DataTable dtDeleteFolderFile = new DataTable(); dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName", new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0]; foreach (DataRow dr in dtDeleteFolderFile.Rows) { name = dr["FileName"].ToString(); records = Convert.ToInt32(dr["NoOfRecords"].ToString()); bytes = (Byte[])dr["Data"]; } FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open); StreamReader streamreader = new StreamReader(readfile); string line = ""; line = streamreader.ReadLine(); But here I have used a FileStream to read from a particular path, whereas I have saved the txt file in byte format into my database. How do I read the txt file content using the byte[] value, instead of using the path?
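
    A sketch of the byte[] route: wrap the array in a MemoryStream and read lines from that instead of from a file on disk (variable names follow the question):

        using (MemoryStream ms = new MemoryStream(bytes))        // bytes = (Byte[])dr["Data"]
        using (StreamReader streamreader = new StreamReader(ms))
        {
            string line;
            while ((line = streamreader.ReadLine()) != null)
            {
                // process each line of the stored txt file here
            }
        }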

    Read the article
