Search Results

Search found 3443 results on 138 pages for 'byte'.

  • Are UTF-16 characters (as used by, for example, the wide WinAPI functions) always 2 bytes long?

    - by Cray
    Please clarify for me: how does UTF-16 work? I am a little confused, considering these points: There is a static type in C++, WCHAR, which is always 2 bytes long, obviously. Most of MSDN and some other documentation seem to assume that the characters are always 2 bytes long. This may just be my imagination, and I can't come up with any particular examples, but it just seems that way. There are no "extra wide" functions or character types widely used in C++ or Windows, so I would assume that UTF-16 is all that is ever needed. To my uncertain knowledge, Unicode has a lot more characters than 65535, so they obviously don't have enough space in 2 bytes. UTF-16 seems to be a bigger version of UTF-8, and UTF-8 characters can be of different lengths. So if a UTF-16 character is not always 2 bytes long, how long else could it be? 3 bytes? Or only multiples of 2? And then, for example, if there is a WinAPI function that wants to know the size of a wide string in characters, and the string contains 2 characters which are each 4 bytes long, how is the size of that string in characters calculated? Is it 2 chars long or 4 chars long? (Since it is 8 bytes long, and each WCHAR is 2 bytes.)
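
    For reference, the answer this question is circling: UTF-16 is a variable-width encoding, and code points above U+FFFF take two 16-bit code units (a surrogate pair), i.e. 4 bytes. A small illustrative sketch in C#, whose char is a UTF-16 code unit just like WCHAR, so lengths here are counted the same way the wide WinAPI functions count them:

        using System;

        class Utf16Lengths
        {
            static void Main()
            {
                // U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the Basic
                // Multilingual Plane, so UTF-16 stores it as a surrogate
                // pair: two 16-bit code units.
                string clef = "\U0001D11E";
                Console.WriteLine(clef.Length);                   // 2 (code units, not code points)
                Console.WriteLine(char.IsHighSurrogate(clef[0])); // True
                Console.WriteLine(char.IsLowSurrogate(clef[1]));  // True

                // Counting actual code points means walking surrogate pairs:
                int codePoints = 0;
                for (int i = 0; i < clef.Length; i += char.IsSurrogatePair(clef, i) ? 2 : 1)
                    codePoints++;
                Console.WriteLine(codePoints);                    // 1
            }
        }

    So a WinAPI "size in characters" is really a count of WCHAR code units: a string of two 4-byte characters has length 4, not 2.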

  • An easy way to replace fread() calls with reading from a byte array?

    - by Sam Washburn
    I have a piece of code that needs to be run from a restricted environment that doesn't allow stdio (Flash's Alchemy compiler). The code uses standard fopen/fread functions and I need to convert it to read from a char* array. Any ideas on how to best approach this? Does a wrapper exist or some library that would help? Thanks! EDIT: I should also mention that it's reading in structs. Like this: fread(&myStruct, 1, sizeof(myStruct), f);

  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi, Can I rely on the first few bytes of data compressed using the System.IO.Compression.DeflateStream in .NET always being the same? These always seem to be the first bytes: 237, 189, 7, 96, 28, 73, 150, 37, 38, 47, ... I'm assuming this is some kind of header, and I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this? Background info (the reason I want to know this is...): I have a load of data in a database table that could do with being made smaller. I've decided I'm going to start compressing the data, and I'm not going to bother compressing the existing data. When the data gets into my .NET code it is a String. I'd like to be able to look at the first few bytes of the string and see if it has been compressed; if it has, then I need to decompress it. I was originally thinking I could convert the string to bytes and just try decompressing the data; then if an exception happens, I could just assume it wasn't compressed. But I think checking the header bytes would give me much better performance. Many thanks, Mike G
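
    Worth knowing when reading this: .NET's DeflateStream emits a raw DEFLATE stream (RFC 1951), which has no fixed magic header, so constant-looking leading bytes are most likely an artifact of similar inputs rather than a guaranteed signature. A more robust scheme is to prepend a marker of your own when compressing. A minimal C# sketch of that idea, with an arbitrarily chosen two-byte marker:

        using System.IO;
        using System.IO.Compression;

        static class CompressionMarker
        {
            // Marker we control; pick bytes unlikely to start your plain data.
            static readonly byte[] Magic = { 0xC0, 0xDE };

            public static byte[] Compress(byte[] plain)
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    ms.Write(Magic, 0, Magic.Length); // our own header, not DeflateStream's
                    using (DeflateStream ds = new DeflateStream(ms, CompressionMode.Compress, true))
                    {
                        ds.Write(plain, 0, plain.Length);
                    }
                    return ms.ToArray();
                }
            }

            public static bool HasMarker(byte[] data)
            {
                return data.Length >= 2 && data[0] == Magic[0] && data[1] == Magic[1];
            }

            public static byte[] Decompress(byte[] stored)
            {
                using (MemoryStream src = new MemoryStream(stored, Magic.Length, stored.Length - Magic.Length))
                using (DeflateStream ds = new DeflateStream(src, CompressionMode.Decompress))
                using (MemoryStream dst = new MemoryStream())
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = ds.Read(buffer, 0, buffer.Length)) > 0)
                        dst.Write(buffer, 0, read);
                    return dst.ToArray();
                }
            }
        }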

  • How should I compress a file made up of many copies of a single byte with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In this Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you will basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: In the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 occurrences of it. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings up this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (Using Huffman coding, due to the school assignment's rules.) This tutorial seems to explain prefix codes a bit better: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
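
    One common way out of the degenerate case (sketched below in C#; the assignment is Java, but the idea carries over directly): when the frequency table holds exactly one symbol, assign it an arbitrary 1-bit code such as "0" so there is something to write, and record the count in the file header so the decoder knows how many copies to emit. This is a convention layered on top of Huffman coding, not something the algorithm itself dictates.

        using System;
        using System.Collections.Generic;

        static class HuffmanDegenerate
        {
            // Returns a prefix code per symbol. Only the single-symbol case is
            // spelled out; the usual tree construction for 2+ symbols is elided.
            public static Dictionary<byte, string> BuildCodes(Dictionary<byte, int> freq)
            {
                var codes = new Dictionary<byte, string>();
                if (freq.Count == 1)
                {
                    foreach (byte symbol in freq.Keys)
                        codes[symbol] = "0"; // arbitrary 1-bit code for the lone symbol
                    return codes;
                }
                throw new NotImplementedException("ordinary Huffman construction goes here");
            }
        }

    With 1000 copies of one byte, the compressed body is then 1000 bits (125 bytes) plus the header, and the decoder simply replays the symbol that many times.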

  • Mallocing an unsigned char array to store ints

    - by Max Desmond
    I keep getting a segmentation fault when I test the following code. I am currently unable to find an answer after having searched the web. a = (byte *)malloc(sizeof(byte) * x ) ; for( i = 0 ; i < x-1 ; i++ ) { scanf("%d", &y ) ; a[i] = y ; } Both y and x are initialized. x is the size of the array, determined by the user. The segmentation fault is on the second-to-last integer to be added; I found this by adding printf("roar") ; before setting a[i] to y and entering one number at a time. byte is a typedef of unsigned char. Note: I've also tried using a[i] = (byte)y ; a is declared as follows: byte *a ; If you need to view the entire code, it is this: #include <stdio.h> #include <stdlib.h> #include "sort.h" int p_cmp_f () ; int main( int argc, char *argv[] ) { int x, y, i, choice ; byte *a ; while( choice !=2 ) { printf( "Would you like to sort integers?\n1. Yes\n2. No\n" ) ; scanf("%d", &choice ) ; switch(choice) { case 1: printf( "Enter the length of the array: " ) ; scanf( "%d", &x ) ; a = (byte *)malloc(sizeof( byte ) * x ) ; printf( "Enter %d integers to add to the array: ", x ) ; for( i = 0 ; i < x -1 ; i++ ) { scanf( "%d", &y ) ; a[i] = y ; } switch( choice ) { case 1: bubble_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 2: selection_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x; i++ ) printf( "%d", a[i] ; break ; case 3: insertion_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 4: merge_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 5: quick_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; default: printf("Enter either 1,2,3,4, or 5" ) ; break ; } case 2: printf( "Thank you for using this program\n" ) ; return 0 ; break ; default: printf( "Enter either 1 or 2: " ) ; break ; } } free(a) ; return 0 ; } int p_cmp_f( byte *element1, byte *element2 ) { return *((int *)element1) - *((int *)element2) ; }

  • UDP packets are dropped when their size is less than 12 bytes on a certain PC. How do I figure out the reason?

    - by waan
    Hi. I'm stuck on a problem I've never heard of before. I'm making an online game which uses UDP packets for a certain character action. After I developed the UDP module, it seemed to work fine. Most of our team members have no problem, but one man, who is my boss, told me something is wrong with that module. I investigated the problem, and finally found that on his PC, if the UDP packet size is less than 12 bytes, the packet is never delivered to the other host. Some additional information: 1-11 byte UDP packets are dropped; packets of 12 bytes and over are OK. O/S: Microsoft Windows Vista Business. NIC: Attansic L1 Gigabit Ethernet 10/100/1000Base-T Controller. WSASendTo returns TRUE. Loopback UDP packets work fine. What do you think of this problem? What could be causing it, and what should I do next to track down the cause? PS. I don't want to pad every packet out to 12 bytes.

  • How can I get the JSON array data from an NSString or bytes in Xcode 4.2?

    - by user1471568
    I'm trying to get values from an NSData object and it doesn't work. Here is my JSON data: { "count": 3, "item": [{ "id": "1", "latitude": "37.556811", "longitude": "126.922015", "imgUrl": "http://175.211.62.15/sample_res/1.jpg", "found": false }, { "id": "3", "latitude": "37.556203", "longitude": "126.922629", "imgUrl": "http://175.211.62.15/sample_res/3.jpg", "found": false }, { "id": "2", "latitude": "37.556985", "longitude": "126.92286", "imgUrl": "http://175.211.62.15/sample_res/2.jpg", "found": false }] } And here is my code: -(NSDictionary *)getDataFromItemList { NSData *dataBody = [[NSData alloc] initWithBytes:buffer length:sizeof(buffer)]; NSDictionary *iTem = [[NSDictionary alloc]init]; iTem = [NSJSONSerialization JSONObjectWithData:dataBody options:NSJSONReadingMutableContainers error:nil]; NSLog(@"id = %@",[iTem objectForKey:@"id"]); //for Test output = [[NSString alloc] initWithBytes:buffer length:rangeHeader.length encoding:NSUTF8StringEncoding]; NSLog(@"%@",output); return iTem; } How can I access every value in the JSON? Please help me.

  • Steganography Experiment - Trouble hiding message bits in DCT coefficients

    - by JohnHankinson
    I have an application requiring me to be able to embed loss-less data into an image. As such I've been experimenting with steganography, specifically via modification of DCT coefficients as the method I select, apart from being loss-less must also be relatively resilient against format conversion, scaling/DSP etc. From the research I've done thus far this method seems to be the best candidate. I've seen a number of papers on the subject which all seem to neglect specific details (some neglect to mention modification of 0 coefficients, or modification of AC coefficient etc). After combining the findings and making a few modifications of my own which include: 1) Using a more quantized version of the DCT matrix to ensure we only modify coefficients that would still be present should the image be JPEG'ed further or processed (I'm using this in place of simply following a zig-zag pattern). 2) I'm modifying bit 4 instead of the LSB and then based on what the original bit value was adjusting the lower bits to minimize the difference. 3) I'm only modifying the blue channel as it should be the least visible. This process must modify the actual image and not the DCT values stored in file (like jsteg) as there is no guarantee the file will be a JPEG, it may also be opened and re-saved at a later stage in a different format. For added robustness I've included the message multiple times and use the bits that occur most often, I had considered using a QR code as the message data or simply applying the reed-solomon error correction, but for this simple application and given that the "message" in question is usually going to be between 10-32 bytes I have plenty of room to repeat it which should provide sufficient redundancy to recover the true bits. No matter what I do I don't seem to be able to recover the bits at the decode stage. I've tried including / excluding various checks (even if it degrades image quality for the time being). I've tried using fixed point vs. double arithmetic, moving the bit to encode, I suspect that the message bits are being lost during the IDCT back to image. Any thoughts or suggestions on how to get this working would be hugely appreciated. (PS I am aware that the actual DCT/IDCT could be optimized from it's naive On4 operation using row column algorithm, or an FDCT like AAN, but for now it just needs to work :) ) Reference Papers: http://www.lokminglui.com/dct.pdf http://arxiv.org/ftp/arxiv/papers/1006/1006.1186.pdf Code for the Encode/Decode process in C# below: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing.Imaging; using System.Drawing; namespace ImageKey { public class Encoder { public const int HIDE_BIT_POS = 3; // use bit position 4 (1 << 3). public const int HIDE_COUNT = 16; // Number of times to repeat the message to avoid error. // JPEG Standard Quantization Matrix. // (to get higher quality multiply by (100-quality)/50 .. // for lower than 50 multiply by 50/quality. Then round to integers and clip to ensure only positive integers. public static double[] Q = {16,11,10,16,24,40,51,61, 12,12,14,19,26,58,60,55, 14,13,16,24,40,57,69,56, 14,17,22,29,51,87,80,62, 18,22,37,56,68,109,103,77, 24,35,55,64,81,104,113,92, 49,64,78,87,103,121,120,101, 72,92,95,98,112,100,103,99}; // Maximum qauality quantization matrix (if all 1's doesn't modify coefficients at all). 
public static double[] Q2 = {1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1}; public static Bitmap Encode(Bitmap b, string key) { Bitmap response = new Bitmap(b.Width, b.Height, PixelFormat.Format32bppArgb); uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). // Start be transferring the unmodified image portions. // As we'll be using slightly less width/height for the encoding process we'll need the edges to be populated. for (int y = 0; y < b.Height; y++) for (int x = 0; x < b.Width; x++) { if( (x >= imgWidth && x < b.Width) || (y>=imgHeight && y < b.Height)) response.SetPixel(x, y, b.GetPixel(x, y)); } // Setup the counters and byte data for the message to encode. StringBuilder sb = new StringBuilder(); for(int i=0;i<HIDE_COUNT;i++) sb.Append(key); byte[] codeBytes = System.Text.Encoding.ASCII.GetBytes(sb.ToString()); int bitofs = 0; // Current bit position we've encoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to encode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] redData = GetRedChannelData(b, x, y); int[] greenData = GetGreenChannelData(b, x, y); int[] blueData = GetBlueChannelData(b, x, y); int[] newRedData; int[] newGreenData; int[] newBlueData; if (bitofs < totalBits) { double[] redDCT = DCT(ref redData); double[] greenDCT = DCT(ref greenData); double[] blueDCT = DCT(ref blueData); int[] redDCTI = Quantize(ref redDCT, ref Q2); int[] greenDCTI = Quantize(ref greenDCT, ref Q2); int[] blueDCTI = Quantize(ref blueDCT, ref Q2); int[] blueDCTC = Quantize(ref blueDCT, ref Q); HideBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); double[] redDCT2 = DeQuantize(ref redDCTI, ref Q2); double[] greenDCT2 = DeQuantize(ref greenDCTI, ref Q2); double[] blueDCT2 = DeQuantize(ref blueDCTI, ref Q2); newRedData = IDCT(ref redDCT2); newGreenData = IDCT(ref greenDCT2); newBlueData = IDCT(ref blueDCT2); } else { newRedData = redData; newGreenData = greenData; newBlueData = blueData; } MapToRGBRange(ref newRedData); MapToRGBRange(ref newGreenData); MapToRGBRange(ref newBlueData); for(int dy=0;dy<8;dy++) { for(int dx=0;dx<8;dx++) { int col = (0xff<<24) + (newRedData[dx+(dy*8)]<<16) + (newGreenData[dx+(dy*8)]<<8) + (newBlueData[dx+(dy*8)]); response.SetPixel(x+dx,y+dy,Color.FromArgb(col)); } } } } if (bitofs < totalBits) throw new Exception("Failed to encode data - insufficient cover image coefficients"); return (response); } public static void HideBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { int tempValue = 0; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ( (u != 0 || v != 0) && CMatrix[v+(u*8)] != 0 && DCTMatrix[v+(u*8)] != 0) { if (bitofs < totalBits) { tempValue = DCTMatrix[v + (u * 8)]; int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); byte value = (byte)((codeBytes[bytePos] & mask) >> bitPos); // 0 or 1. 
if (value == 0) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a != 0) DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS) - 1; DCTMatrix[v + (u * 8)] &= ~(1 << HIDE_BIT_POS); } else if (value == 1) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a == 0) DCTMatrix[v + (u * 8)] &= ~((1 << HIDE_BIT_POS) - 1); DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS); } if (DCTMatrix[v + (u * 8)] != 0) bitofs++; else DCTMatrix[v + (u * 8)] = tempValue; } } } } } public static void MapToRGBRange(ref int[] data) { for(int i=0;i<data.Length;i++) { data[i] += 128; if(data[i] < 0) data[i] = 0; else if(data[i] > 255) data[i] = 255; } } public static int[] GetRedChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x,y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 16) & 0xff) - 128; } } return (data); } public static int[] GetGreenChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 8) & 0xff) - 128; } } return (data); } public static int[] GetBlueChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 0) & 0xff) - 128; } } return (data); } public static int[] Quantize(ref double[] DCTMatrix, ref double[] Q) { int[] DCTMatrixOut = new int[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (int)Math.Round(DCTMatrix[v + (u * 8)] / Q[v + (u * 8)]); } } return(DCTMatrixOut); } public static double[] DeQuantize(ref int[] DCTMatrix, ref double[] Q) { double[] DCTMatrixOut = new double[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (double)DCTMatrix[v + (u * 8)] * Q[v + (u * 8)]; } } return(DCTMatrixOut); } public static double[] DCT(ref int[] data) { double[] DCTMatrix = new double[8 * 8]; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double sum = 0.0; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double s = data[x + (y * 8)]; double dctVal = Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += s * dctVal; } } DCTMatrix[u + (v * 8)] = (0.25 * cu * cv * sum); } } return (DCTMatrix); } public static int[] IDCT(ref double[] DCTMatrix) { int[] Matrix = new int[8 * 8]; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double sum = 0; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double idctVal = (cu * cv) / 4.0 * Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += (DCTMatrix[u + (v * 8)] * idctVal); } } Matrix[x + (y * 8)] = (int)Math.Round(sum); } } return (Matrix); } } public class Decoder { public static string Decode(Bitmap b, int expectedLength) { expectedLength *= Encoder.HIDE_COUNT; uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). 
// Setup the counters and byte data for the message to decode. byte[] codeBytes = new byte[expectedLength]; byte[] outBytes = new byte[expectedLength / Encoder.HIDE_COUNT]; int bitofs = 0; // Current bit position we've decoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to decode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] blueData = ImageKey.Encoder.GetBlueChannelData(b, x, y); double[] blueDCT = ImageKey.Encoder.DCT(ref blueData); int[] blueDCTI = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q2); int[] blueDCTC = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q); if (bitofs < totalBits) GetBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); } } bitofs = 0; for (int i = 0; i < (expectedLength / Encoder.HIDE_COUNT) * 8; i++) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); List<int> values = new List<int>(); int zeroCount = 0; int oneCount = 0; for (int j = 0; j < Encoder.HIDE_COUNT; j++) { int val = (codeBytes[bytePos + ((expectedLength / Encoder.HIDE_COUNT) * j)] & mask) >> bitPos; values.Add(val); if (val == 0) zeroCount++; else oneCount++; } if (oneCount >= zeroCount) outBytes[bytePos] |= mask; bitofs++; values.Clear(); } return (System.Text.Encoding.ASCII.GetString(outBytes)); } public static void GetBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0) { if (bitofs < totalBits) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); int value = DCTMatrix[v + (u * 8)] & (1 << Encoder.HIDE_BIT_POS); if (value != 0) codeBytes[bytePos] |= mask; bitofs++; } } } } } } } UPDATE: By switching to using a QR Code as the source message and swapping a pair of coefficients in each block instead of bit manipulation I've been able to get the message to survive the transform. However to get the message to come through without corruption I have to adjust both coefficients as well as swap them. For example swapping (3,4) and (4,3) in the DCT matrix and then respectively adding 8 and subtracting 8 as an arbitrary constant seems to work. This survives a re-JPEG'ing of 96 but any form of scaling/cropping destroys the message again. I was hoping that by operating on mid to low frequency values that the message would be preserved even under some light image manipulation.

  • Java to JavaScript (Encryption related)

    - by balexandre
    Hi guys, I'm having difficulty getting the same string in JavaScript, and I'm thinking that I'm doing something wrong... Java code: import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.Date; import java.util.GregorianCalendar; import sun.misc.BASE64Encoder; private static String getBase64Code(String input) throws UnsupportedEncodingException, NoSuchAlgorithmException { String base64 = ""; byte[] txt = input.getBytes("UTF8"); byte[] text = new byte[txt.length+3]; text[0] = (byte)239; text[1] = (byte)187; text[2] = (byte)191; for(int i=0; i<txt.length; i++) text[i+3] = txt[i]; MessageDigest md = MessageDigest.getInstance("MD5"); md.update(text); byte digest[] = md.digest(); BASE64Encoder encoder = new BASE64Encoder(); base64 = encoder.encode(digest); return base64; } I'm trying this using Paj's MD5 script as well as Farhadi's Base64 Encode script, but my tests fail completely :( My code: function CalculateCredentialsSecret(type, user, pwd) { var days = days_between(new Date(), new Date(2000, 1, 1)); var str = type.toUpperCase() + user.toUpperCase() + pwd.toUpperCase() + days; var md5 = hex_md5(str); var b64 = base64Encode(md5); return encodeURIComponent(b64); } Does anyone know how I can convert this Java method into a JavaScript one? Thank you. Tests (for today, 3740 days after January 1st, 2000): var secret = CalculateCredentialsSecret('AAA', 'BBB', 'CCC'); // secret SHOULD be: S3GYAfGWlmrhuoNsIJF94w==

  • Images from SQL Server JPG/PNG Image Column not being Type Converted to Bitmap in HttpHandlers (cons

    - by kanchirk
    Our Silverlight 3.0 Client consumes Images stored/retrieved on the File System through ASP.NET HttpHandlers successfully. We are trying to store and read back Images using a SQL Server 2008 Database. Please find the stripped-down code pasted below, with the exception "Bitmap is not Valid". //Store document to the database private void SaveImageToDatabaseKK(HttpContext context, string pImageFileName) { try { //ADO.NET Entity Framework ImageTable documentDB = new ImageTable(); int intLength = Convert.ToInt32(context.Request.InputStream.Length); //Move the file contents into the Byte array Byte[] arrContent = new Byte[intLength]; context.Request.InputStream.Read(arrContent, 0, intLength); //Insert record into the Document table documentDB.InsertDocument(pImageFileName, arrContent, intLength); } catch { } } The method to read back the row from the table and send it back is below: private void RetrieveImageFromDatabaseTableKK(HttpContext context, string pImageName) { try { ImageTable documentDB = new ImageTable(); var docRow = documentDB.GetDocument(pImageName); //based on Imagename which is unique //DocData column in table is Image if (docRow!=null && docRow.DocData != null && docRow.DocData.Length > 0) { Byte[] bytImage = docRow.DocData; if (bytImage != null && bytImage.Length > 0) { Bitmap newBmp = ConvertToBitmap(context, bytImage ); if (newBmp != null) { newBmp.Save(context.Response.OutputStream, ImageFormat.Jpeg); newBmp.Dispose(); } } } } catch (Exception exRI) { } } // Convert byte array to Bitmap (byte[] to Bitmap) protected Bitmap ConvertToBitmap(byte[] bmp) { if (bmp != null) { try { TypeConverter tc = TypeDescriptor.GetConverter(typeof(Bitmap)); Bitmap b = (Bitmap)tc.ConvertFrom(bmp); //This is where the exception occurs. return b; } catch (Exception) { } } return null; }
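
    For comparison, the usual way to materialize a Bitmap from encoded JPG/PNG bytes in .NET goes through a MemoryStream; if that also throws, the byte array itself is likely truncated or not a valid encoded image, and the stored length is worth comparing against the original file. A hedged sketch of the MemoryStream route (GDI+ keeps a reference to the source stream, hence the copy into a fresh Bitmap):

        using System.Drawing;
        using System.IO;

        static class BitmapBytes
        {
            public static Bitmap FromBytes(byte[] bytes)
            {
                using (MemoryStream ms = new MemoryStream(bytes))
                using (Image img = Image.FromStream(ms))
                {
                    return new Bitmap(img); // detached copy, safe after the stream closes
                }
            }
        }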

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations for the Internet Checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the internet protocol specifies that: "The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero." More explanation can be found from Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol and don't have test vectors of my own and would rather unit test it than put into production to see if it matches what is currently being used! ;-) Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests. [TestMethod()] public void InternetChecksum_SimplestValidValue_ShouldMatch() { IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros ushort expected = 0xFFFF; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); } [TestMethod()] public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch() { IEnumerable<byte> value = new byte[]{0xFF}; ushort expected = 0xFF; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); } [TestMethod()] public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch() { IEnumerable<byte> value = new byte[] { 0x00, 0xFF }; ushort expected = 0xFF00; ushort actual = value.InternetChecksum(); Assert.AreEqual(expected, actual); }
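
    A sketch of an extension method that passes the three tests above, assuming bytes pair into big-endian 16-bit words and an odd trailing byte is padded with a zero low byte (the convention the expected values imply):

        using System.Collections.Generic;

        public static class ChecksumExtensions
        {
            // RFC 791 / RFC 1071 Internet checksum: the one's complement of the
            // one's complement sum of the data taken as 16-bit big-endian words.
            public static ushort InternetChecksum(this IEnumerable<byte> value)
            {
                uint sum = 0;
                int pendingHighByte = -1; // set while we hold half of a word

                foreach (byte b in value)
                {
                    if (pendingHighByte < 0)
                    {
                        pendingHighByte = b;
                    }
                    else
                    {
                        sum += (uint)((pendingHighByte << 8) | b);
                        pendingHighByte = -1;
                    }
                }
                if (pendingHighByte >= 0)
                    sum += (uint)(pendingHighByte << 8); // odd length: zero pad

                // Fold carries back into the low 16 bits (one's complement add).
                while ((sum >> 16) != 0)
                    sum = (sum & 0xFFFF) + (sum >> 16);

                return (ushort)~sum;
            }
        }

    All-zero input of any length sums to 0, whose complement is 0xFFFF; a lone 0xFF becomes the word 0xFF00, whose complement is 0x00FF; and { 0x00, 0xFF } becomes 0x00FF, whose complement is 0xFF00, matching all three tests.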

  • Encrypt string with public key only

    - by vlahovic
    I'm currently working on an Android project where I need to encrypt a string using 128-bit AES, PKCS7 padding, and CBC. I don't want to use any salt for this. I've tried loads of different variations, including PBEKey, but I can't come up with working code. This is what I currently have: String plainText = "24124124123"; String pwd = "BobsPublicPassword"; byte[] key = pwd.getBytes(); key = cutArray(key, 16); byte[] input = plainText.getBytes(); byte[] output = null; SecretKeySpec keySpec = null; keySpec = new SecretKeySpec(key, "AES"); Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding"); cipher.init(Cipher.ENCRYPT_MODE, keySpec); output = cipher.doFinal(input); private static byte[] cutArray(byte[] arr, int length){ byte[] resultArr = new byte[length]; for(int i = 0; i < length; i++ ){ resultArr[i] = arr[i]; } return resultArr; } Any help appreciated //Vlahovic
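
    For comparison, the same construction in C#: the parameters (AES-128, CBC, PKCS7) map one-to-one, and this sketch makes the IV explicit, which the Java snippet above leaves implicit (Cipher.init without an IvParameterSpec picks a random IV that you would have to fetch with cipher.getIV() and hand to the decryptor). Using raw password bytes as the key, as the question does, is weak next to a real key-derivation function, but is kept here for parity:

        using System.Security.Cryptography;
        using System.Text;

        static class AesCbcSketch
        {
            public static byte[] Encrypt(string plainText, byte[] key16, byte[] iv16)
            {
                using (Aes aes = Aes.Create())
                {
                    aes.Mode = CipherMode.CBC;
                    aes.Padding = PaddingMode.PKCS7;
                    aes.Key = key16; // 16 bytes -> AES-128
                    aes.IV = iv16;   // must travel with the ciphertext for decryption

                    using (ICryptoTransform enc = aes.CreateEncryptor())
                    {
                        byte[] input = Encoding.UTF8.GetBytes(plainText);
                        return enc.TransformFinalBlock(input, 0, input.Length);
                    }
                }
            }
        }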

  • Java socket bug on linux (0xFF sent, -3 received)

    - by Marius
    While working on a WebSocket server in Java I came across this strange bug. I've reduced it down to two small Java files, one is the server, the other is the client. The client simply sends 0x00, the string Hello and then 0xFF (per the WebSocket specification). On my Windows machine, the server prints the following: Listening byte: 0 72 101 108 108 111 recieved: 'Hello' While on my Unix box the same code prints the following: Listening byte: 0 72 101 108 108 111 -3 Instead of receiving 0xFF it gets -3, never breaks out of the loop and never prints what it has received. The important part of the code looks like this: byte b = (byte)in.read(); System.out.println("byte: "+b); StringBuilder input = new StringBuilder(); b = (byte)in.read(); while((b & 0xFF) != 0xFF){ input.append((char)b); System.out.print(b+" "); b = (byte)in.read(); } inputLine = input.toString(); System.out.println("recieved: '" + inputLine+"'"); if(inputLine.equals("bye")){ break; } I've also uploaded the two files to my server: Server.java Client.java My Windows machine is running Windows 7 and my Linux machine is running Debian

  • Java to JavaScript (Encryption related)

    - by balexandre
    Hi guys, I'm having difficulty getting the same string in JavaScript, and I'm thinking that I'm doing something wrong... Java code: import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.Date; import java.util.GregorianCalendar; import sun.misc.BASE64Encoder; private static String getBase64Code(String input) throws UnsupportedEncodingException, NoSuchAlgorithmException { String base64 = ""; byte[] txt = input.getBytes("UTF8"); byte[] text = new byte[txt.length+3]; text[0] = (byte)239; text[1] = (byte)187; text[2] = (byte)191; for(int i=0; i<txt.length; i++) text[i+3] = txt[i]; MessageDigest md = MessageDigest.getInstance("MD5"); md.update(text); byte digest[] = md.digest(); BASE64Encoder encoder = new BASE64Encoder(); base64 = encoder.encode(digest); return base64; } I'm trying this using Paj's MD5 script as well as Farhadi's Base64 Encode script, but my tests fail completely :( My code: function CalculateCredentialsSecret(type, user, pwd) { var days = days_between(new Date(), new Date(2000, 1, 1)); var str = type.toUpperCase() + user.toUpperCase() + pwd.toUpperCase() + days; var md5 = any_md5('', str); var b64 = base64Encode(md5); return encodeURIComponent(b64); } Does anyone know how I can convert this Java method into a JavaScript one? Thank you

  • C++ to VB.NET, problem with callback function

    - by johan
    I'm having a hard time here trying to find a solution for my problem. I'm trying to convert a client API funktion from C++ to VB.NET, and i think have some problems with the callback function. parts of the C++ code: typedef struct{ BYTE m_bRemoteChannel; BYTE m_bSendMode; BYTE m_nImgFormat; // =0 cif ; = 1 qcif char *m_sIPAddress; char *m_sUserName; char *m_sUserPassword; BOOL m_bUserCheck; HWND m_hShowVideo; }CLIENT_VIDEOINFO, *PCLIENT_VIDEOINFO; CPLAYER_API LONG __stdcall MP4_ClientStart(PCLIENT_VIDEOINFO pClientinfo,void(CALLBACK *ReadDataCallBack)(DWORD nPort,UCHAR *pPacketBuffer,DWORD nPacketSize)); void CALLBACK ReadDataCallBack(DWORD nPort,UCHAR *pPacketBuffer,DWORD nPacketSize) { TRACE("%d\n",nPacketSize); } ..... aa5.m_sUserName = "123"; aa5.m_sUserPassword="w"; aa5.m_bUserCheck = TRUE; MP4_ClientSetTTL(64); nn1 = MP4_ClientStart(&aa5,ReadDataCallBack); if (nn1 == -1) { MessageBox("error"); return; } SDK description: MP4_ClientStart This function starts a connection. The format of the call is: LONG __stdcall MP4_ClientStart(PCLIENT_VIDEOINFO pClientinfo, void(*ReadDataCallBack)(DWORD nChannel,UCHAR *pPacketBuffer,DWORD nPacketSize)) Parameters pClientinfo holds the information. of this connection. nChannel holds the channel of card. pPacketBuffer holds the pointer to the receive buffer. nPacketSize holds the length of the receive buffer. Return Values If the function succeeds the return value is the context of this connection. If the function fails the return value is -1. Remarks typedef struct{ BYTE m_bRemoteChannel; BYTE m_bSendMode; BYTE m_bImgFormat; char *m_sIPAddress; char *m_sUserName; char *m_sUserPassword; BOOL m_bUserCheck; HWND m_hShowVideo; } CLIENT_VIDEOINFO, * PCLIENT_VIDEOINFO; m_bRemoteChannel holds the channel which the client wants to connect to. m_bSendMode holds the network mode of the connection. m_bImgFormat : Image format, 0 is main channel video, 1 is sub channel video m_sIPAddress holds the IP address of the server. m_sUserName holds the user’s name. m_sUserPassword holds the user’s password. m_bUserCheck holds the value whether sends the user’s name and password or not. m_hShowVideo holds Handle for this video window. If m_hShowVideo holds NULL, the client can be record only without decoder. If m_bUserCheck is FALSE, we will send m_sUserName and m_sUserPassword as NULL, else we will send each 50 bytes. The length of m_sIPAddress and m_sUserName must be more than 50 bytes. ReadDataCallBack: When the library receives a packet from a server, this callback is called. 
My VB.Net code: Imports System.Runtime.InteropServices Public Class Form1 Const WM_USER = &H400 Public Structure CLIENT_VIDEOINFO Public m_bRemoteChannel As Byte Public m_bSendMode As Byte Public m_bImgFormat As Byte Public m_sIPAddress As String Public m_sUserName As String Public m_sUserPassword As String Public m_bUserCheck As Boolean Public m_hShowVideo As Long 'hWnd End Structure Public Declare Function MP4_ClientSetNetPort Lib "hikclient.dll" (ByVal dServerPort As Integer, ByVal dClientPort As Integer) As Boolean Public Declare Function MP4_ClientStartup Lib "hikclient.dll" (ByVal nMessage As UInteger, ByVal hWnd As System.IntPtr) As Boolean <DllImport("hikclient.dll")> Public Shared Function MP4_ClientStart(ByVal Clientinfo As CLIENT_VIDEOINFO, ByRef ReadDataCallBack As CALLBACKdel) As Long End Function Public Delegate Sub CALLBACKdel(ByVal nPort As Long, <MarshalAs(UnmanagedType.LPArray)> ByRef pPacketBuffer As Byte(), ByVal nPacketSize As Long) Public Sub CALLBACK(ByVal nPort As Long, <MarshalAs(UnmanagedType.LPArray)> ByRef pPacketBuffer As Byte(), ByVal nPacketSize As Long) End Sub Public mydel As New CALLBACKdel(AddressOf CALLBACK) Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim Clientinfo As New CLIENT_VIDEOINFO() Clientinfo.m_bRemoteChannel = 0 Clientinfo.m_bSendMode = 0 Clientinfo.m_bImgFormat = 0 Clientinfo.m_sIPAddress = "193.168.1.100" Clientinfo.m_sUserName = "1" Clientinfo.m_sUserPassword = "a" Clientinfo.m_bUserCheck = False Clientinfo.m_hShowVideo = Me.Handle 'Nothing MP4_ClientSetNetPort(850, 850) MP4_ClientStartup(WM_USER + 1, Me.Handle) MP4_ClientStart(Clientinfo, mydel) End Sub End Class here is some other examples of the code in: C# http://blog.csdn.net/nenith1981/archive/2007/09/17/1787692.aspx VB ://read.pudn.com/downloads70/sourcecode/graph/250633/MD%E5%AE%A2%E6%88%B7%E7%AB%AF%28VB%29/hikclient.bas__.htm ://read.pudn.com/downloads70/sourcecode/graph/250633/MD%E5%AE%A2%E6%88%B7%E7%AB%AF%28VB%29/Form1.frm__.htm Delphi ://read.pudn.com/downloads91/sourcecode/multimedia/streaming/349759/Delphi_client/Unit1.pas__.htm

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high traffic and high connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error: % snort -V ,,_ -*> Snort! <*- o" )~ Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux '''' By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team Copyright (C) 1998-2010 Sourcefire, Inc., et al. Using libpcap version 1.1.1 Using PCRE version: 8.11 2010-12-10 Using ZLIB version: 1.2.5 %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/ (snip) 273 out of 1024 flowbits in use. [ Port Based Pattern Matching Memory ] +- [ Aho-Corasick Summary ] ------------------------------------- | Storage Format : Full-Q | Finite Automaton : DFA | Alphabet Size : 256 Chars | Sizeof State : Variable (1,2,4 bytes) | Instances : 314 | 1 byte states : 304 | 2 byte states : 10 | 4 byte states : 0 | Characters : 69371 | States : 58631 | Transitions : 3471623 | State Density : 23.1% | Patterns : 3020 | Match States : 2934 | Memory (MB) : 29.66 | Patterns : 0.36 | Match Lists : 0.77 | DFA | 1 byte states : 1.37 | 2 byte states : 26.59 | 4 byte states : 0.00 +---------------------------------------------------------------- [ Number of patterns truncated to 20 bytes: 563 ] ERROR: Can't find pcap DAQ! Fatal Error, Quitting.. net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?

  • In Wireshark's Protocol Hierarchy Statistics screen, is the total byte count of a capture the sum of the Bytes column or just the top line (Frame)?

    - by Howiecamp
    Part 1 - I'm looking at Wireshark's Protocol Hierarchy Statistics screen (sample below): is the total byte count of the capture the sum of the Bytes column, or just the top line (Frame)? I'm 99% sure it's the latter because of protocol rollup, but I wanted to confirm. Part 2 - From the Wireshark documentation on this screen: "Protocol layers can consist of packets that won't contain any higher layer protocol, so the sum of all higher layer packets may not sum up to the protocols packet count. Example: In the screenshot TCP has 85,83% but the sum of the subprotocols (HTTP, ...) is much less. This may be caused by TCP protocol overhead, e.g. TCP ACK packets won't be counted as packets of the higher layer)." Can you explain this?

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compactly). Every class is represented by primitives and some meta-data (containing class properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit level: using a sort of "packing" mechanism, which is read by an FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check the nearest column header for the meaning of a switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?
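
    The payload arithmetic in the example is easy to make concrete; the metadata encoding above is the author's own scheme and isn't reproduced here, but a C# sketch of packing the 32 booleans into 4 bytes looks like this:

        using System;

        class PackedBools
        {
            static void Main()
            {
                uint packed = 0;

                // Set all 32 flags, mirroring the "32 booleans in 4 bytes" example.
                for (int i = 0; i < 32; i++)
                    packed |= 1u << i;

                Console.WriteLine(packed == 0xFFFFFFFF); // True

                // Reading flag 5 back out:
                bool flag5 = ((packed >> 5) & 1u) != 0;
                Console.WriteLine(flag5);                // True
            }
        }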

  • Is there any way that an export-to-Excel function can be scalable?

    - by MusiGenesis
    Summary: ASP.Net website with a couple hundred users. Data is exported to Excel files which can be relatively large (~5 MB). In the pilot phase (just a few users), we are already seeing occasional errors on the server in the exporting method. Here's the stack trace: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.IO.MemoryStream.set_Capacity(Int32 value) at System.IO.MemoryStream.EnsureCapacity(Int32 value) at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.TrackingMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.SparseMemoryStream.WriteAndCollapseBlocks(Byte[ ] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.SparseMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.CompressEmulationStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.CompressStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Zip.ProgressiveCrcCalculatingStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Zip.ZipIOModeEnforcingStream.Write(Byte[] buffer, Int32 offset, Int32 count) at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder) at System.IO.StreamWriter.Write(String value) at System.Xml.XmlTextEncoder.Write(String text) at System.Xml.XmlTextWriter.WriteString(String text) at System.Xml.XmlText.WriteTo(XmlWriter w) at System.Xml.XmlAttribute.WriteContentTo(XmlWriter w) at System.Xml.XmlAttribute.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlDocument.WriteContentTo(XmlWriter xw) at System.Xml.XmlDocument.WriteTo(XmlWriter w) at System.Xml.XmlDocument.Save(Stream outStream) at OfficeOpenXml.ExcelWorksheet.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorksheet.cs:line 605 at OfficeOpenXml.ExcelWorkbook.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorkbook.cs:line 439 at OfficeOpenXml.ExcelPackage.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelPackage.cs:line 348 at Framework.Exporting.Business.ExcelExport.BuildReport(HttpContext context) at WebUserControl.BtnXLS_Click(Object sender, EventArgs e) in C:\TEMP\XXXXXXXXXX\XXXXXXXXXX\OneList\UserControls\TicketReportExporter. 
ascx.cs:line 108 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.Rai sePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context) at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.XXXXXXXXXXX_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\XXXX\cdf32a52\d1a5eabd\App_Web_enxdwlks.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpAppli cation.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Even aside from this particular problem, in general exporting to Excel requires the instantiation of huge Excel objects on the server for each request, which I've always assumed to mean disqualifies Excel for "serious" work on a highly-loaded server. Is there any general way to export to Excel in a "light-weight" manner? Would simply streaming the data into a CSV file work for this?
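
    On the closing question: yes, streaming CSV row by row avoids exactly the failure above, since the OutOfMemoryException is thrown while the whole OOXML package is being buffered in a MemoryStream. A minimal sketch of the idea as an ASP.NET handler (the type and data-source names here are hypothetical placeholders, and CSV of course loses formatting and multiple sheets):

        using System.Collections.Generic;
        using System.Web;

        public class CsvExportHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/csv";
                context.Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
                context.Response.Write("Id,Name\r\n");

                // Nothing is buffered beyond the current row, so memory use
                // stays flat no matter how many rows the export contains.
                foreach (string[] row in FetchRows())
                {
                    for (int i = 0; i < row.Length; i++)
                    {
                        if (i > 0) context.Response.Write(",");
                        // Quote fields and double embedded quotes (RFC 4180).
                        context.Response.Write("\"" + row[i].Replace("\"", "\"\"") + "\"");
                    }
                    context.Response.Write("\r\n");
                }
            }

            // Placeholder: a real export would yield rows from a forward-only
            // database reader instead of a fixed list.
            private static IEnumerable<string[]> FetchRows()
            {
                yield return new[] { "1", "Alice" };
                yield return new[] { "2", "Bob \"the builder\"" };
            }
        }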

  • SQL Server 2008 Filestream Impersonation Access Denied error

    - by Adi
    I've been trying to upload a file to the database using SQL SERVER 2008 Filestream and Impersonation technique to save the file in the file system, but i keep getting Access Denied error; even though i've set the permissions for the impersonating user to the Filestream folder(C:\SQLFILESTREAM\Dev_DB). when i debugged the code, i found the server return a unc path(\Server_Name\MSSQLSERVER\v1\Dev_LMDB\dbo\FileData\File_Data\13C39AB1-8B91-4F5A-81A1-940B58504C17), which was not accessible through windows explorer. I've my web application hosted on local maching(Windows 7). SQL Server is located on a remote server(Windows Server 2008 R2). Sql authentication was used to call the stored procedure. Following is the code i've used to do the above operations. SqlCommand sqlCmd = new SqlCommand("AddFile"); sqlCmd.CommandType = CommandType.StoredProcedure; sqlCmd.Parameters.Add("@File_Name", SqlDbType.VarChar, 512).Value = filename; sqlCmd.Parameters.Add("@File_Type", SqlDbType.VarChar, 5).Value = Path.GetExtension(filename); sqlCmd.Parameters.Add("@Username", SqlDbType.VarChar, 20).Value = username; sqlCmd.Parameters.Add("@Output_File_Path", SqlDbType.VarChar, -1).Direction = ParameterDirection.Output; DAManager PDAM = new DAManager(DAManager.getConnectionString()); using (SqlConnection connection = (SqlConnection)PDAM.CreateConnection()) { connection.Open(); SqlTransaction transaction = connection.BeginTransaction(); WindowsImpersonationContext wImpersonationCtx; NetworkSecurity ns = null; try { PDAM.ExecuteNonQuery(sqlCmd, transaction); string filepath = sqlCmd.Parameters["@Output_File_Path"].Value.ToString(); sqlCmd = new SqlCommand("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()"); sqlCmd.CommandType = CommandType.Text; byte[] Context = (byte[])PDAM.ExecuteScalar(sqlCmd, transaction); byte[] buffer = new byte[4096]; int bytedRead; ns = new NetworkSecurity(); wImpersonationCtx = ns.ImpersonateUser(IMP_Domain, IMP_Username, IMP_Password, LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT); SqlFileStream sfs = new SqlFileStream(filepath, Context, System.IO.FileAccess.Write); while ((bytedRead = inFS.Read(buffer, 0, buffer.Length)) != 0) { sfs.Write(buffer, 0, bytedRead); } sfs.Close(); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); } finally { sqlCmd.Dispose(); connection.Close(); connection.Dispose(); ns.undoImpersonation(); wImpersonationCtx = null; ns = null; } } Can someone help me with this issue. Reference Exception: Type : System.ComponentModel.Win32Exception, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Message : Access is denied Source : System.Data Help link : NativeErrorCode : 5 ErrorCode : -2147467259 Data : System.Collections.ListDictionaryInternal TargetSite : Void OpenSqlFileStream(System.String, Byte[], System.IO.FileAccess, System.IO.FileOptions, Int64) Stack Trace : at System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access) Thanks
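
    One detail worth flagging when reading this (an observation, not a verified diagnosis of this exact error): Win32 streaming access to FILESTREAM data via SqlFileStream is documented to require Windows (integrated) authentication, and the question mentions using SQL authentication. A minimal sketch of the connection setup that streaming access expects, with placeholder server and database names:

        using System.Data.SqlClient;

        static class FileStreamConnection
        {
            public static SqlConnection Open()
            {
                // SqlFileStream needs integrated security; SQL Server logins
                // are not supported for Win32 access to FILESTREAM data.
                string cs = "Data Source=REMOTE_SERVER;Initial Catalog=Dev_DB;Integrated Security=True";
                SqlConnection conn = new SqlConnection(cs);
                conn.Open();
                return conn;
            }
        }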

  • Printf in assembler doesn't print

    - by Gaim
    Hi there, I have got a homework to hack program using buffer overflow ( with disassambling, program was written in C++, I haven't got the source code ). I have already managed it but I have a problem. I have to print some message on the screen, so I found out address of printf function, pushed address of "HACKED" and address of "%s" on the stack ( in this order ) and called that function. Called code passed well but nothing had been printed. I have tried to simulate the environment like in other place in the program but there has to be something wrong. Do you have any idea what I am doing wrong that I have no output, please? Thanks a lot EDIT: This program is running on Windows XP SP3 32b, written in C++, Intel asm there is the "hack" code CPU Disasm Address Hex dump Command Comments 0012F9A3 90 NOP ;hack begins 0012F9A4 90 NOP 0012F9A5 90 NOP 0012F9A6 89E5 MOV EBP,ESP 0012F9A8 83EC 7F SUB ESP,7F ;creating a place for working data 0012F9AB 83EC 7F SUB ESP,7F 0012F9AE 31C0 XOR EAX,EAX 0012F9B0 50 PUSH EAX 0012F9B1 50 PUSH EAX 0012F9B2 50 PUSH EAX 0012F9B3 89E8 MOV EAX,EBP 0012F9B5 83E8 09 SUB EAX,9 0012F9B8 BA 1406EDFF MOV EDX,FFED0614 ;address to jump, it is negative because there mustn't be 00 bytes 0012F9BD F7DA NOT EDX 0012F9BF FFE2 JMP EDX ;I have to jump because there are some values overwritten by the program 0012F9C1 90 NOP 0012F9C2 0090 00000000 ADD BYTE PTR DS:[EAX],DL 0012F9C8 90 NOP 0012F9C9 90 NOP 0012F9CA 90 NOP 0012F9CB 90 NOP 0012F9CC 6C INS BYTE PTR ES:[EDI],DX ; I/O command 0012F9CD 65:6E OUTS DX,BYTE PTR GS:[ESI] ; I/O command 0012F9CF 67:74 68 JE SHORT 0012FA3A ; Superfluous address size prefix 0012F9D2 2069 73 AND BYTE PTR DS:[ECX+73],CH 0012F9D5 203439 AND BYTE PTR DS:[EDI+ECX],DH 0012F9D8 34 2C XOR AL,2C 0012F9DA 2066 69 AND BYTE PTR DS:[ESI+69],AH 0012F9DD 72 73 JB SHORT 0012FA52 0012F9DF 74 20 JE SHORT 0012FA01 0012F9E1 3120 XOR DWORD PTR DS:[EAX],ESP 0012F9E3 6C INS BYTE PTR ES:[EDI],DX ; I/O command 0012F9E4 696E 65 7300909 IMUL EBP,DWORD PTR DS:[ESI+65],-6F6FFF8D 0012F9EB 90 NOP 0012F9EC 90 NOP 0012F9ED 90 NOP 0012F9EE 31DB XOR EBX,EBX ; hack continues 0012F9F0 8818 MOV BYTE PTR DS:[EAX],BL ; writing 00 behind word "HACKED" 0012F9F2 83E8 06 SUB EAX,6 0012F9F5 50 PUSH EAX ; address of "HACKED" 0012F9F6 B8 3B8CBEFF MOV EAX,FFBE8C3B 0012F9FB F7D0 NOT EAX 0012F9FD 50 PUSH EAX ; address of "%s" 0012F9FE B8 FFE4BFFF MOV EAX,FFBFE4FF 0012FA03 F7D0 NOT EAX 0012FA05 FFD0 CALL EAX ;address of printf This code is really ugly because I am new in assembler and there mustn't be null bytes because of buffer-overflow bug

  • A reference that is not to 'const' cannot be bound to a non-lvalue

    - by Bert
    Hello, I'm struggling a bit with this. I'm declaring: BYTE *pImage = NULL; Used in the call: m_pMyInterface->GetImage(i, &imageSize, &pImage); Visual C++ 2003 compiler error: error C2664: 'CJrvdInterface::GetImage' : cannot convert parameter 3 from 'BYTE **__w64 ' to 'BYTE **& ' A reference that is not to 'const' cannot be bound to a non-lvalue The method called is defined as: void CMyInterface::GetImage(const int &a_iTileId, ULONG *a_pulImageSize, BYTE** &a_ppbImage) { (...) Any help much appreciated, Bert
