Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 113/151

  • Python - Finding unicode/ascii problems

    - by user330739
    Hi all, I am using csv.reader to pull in info from a very long sheet. I am doing work on that data set and then using the xlwt package to give me a workable Excel file. However, I get this error: UnicodeDecodeError: 'ascii' codec can't decode byte 0x92 in position 34: ordinal not in range(128) My question to you all is, how can I find exactly where that error is in my data set? Also, is there some code I can write that will look through my data set and find where the issues lie (because some data sets run without the above error and others have problems)?
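
    One way to locate the offending bytes before csv.reader or xlwt ever see the data, sketched as a minimal Python snippet (the file name data.csv is made up): scan the raw bytes of the input and report the line and column of every non-ASCII byte, such as the 0x92 in the traceback.

        # Report every non-ASCII byte in the input with its line and column.
        with open("data.csv", "rb") as f:
            for lineno, line in enumerate(f, start=1):
                for col, b in enumerate(line):
                    if b > 127:  # outside the ASCII range
                        print(f"line {lineno}, column {col}: byte 0x{b:02x}")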

    Read the article

  • Facebook user_id as MongoDB BSON ObjectId?

    - by MattDiPasquale
    I'm rebuilding Lovers on Facebook with Sinatra & Redis. I like Redis because it doesn't have the long (12-byte) BSON ObjectIds, and I am storing sets of Facebook user_ids for each user. The sets are requests_sent, requests_received, & relationships, and they all contain Facebook user ids. I'm thinking of switching to MongoDB because I want to use its geospatial indexing. If I do, I'd want to use the FB user ids as the _id field because I want the sets to be small and I want the JSON responses to be small. But is the BSON ObjectId better (more efficient for MongoDB) to use than just an integer (fb user_id)?
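
    For what it's worth, a minimal sketch of the integer-_id approach in Python/PyMongo (the database and collection names lovers and users are assumptions, and the driver will differ from whatever the asker ends up using): MongoDB accepts any unique value as _id, so the Facebook user_id can serve directly as the primary key.

        from pymongo import MongoClient

        users = MongoClient().lovers.users          # hypothetical db/collection
        users.insert_one({
            "_id": 1234567890,                      # Facebook user_id as the key
            "requests_sent": [111, 222],
            "requests_received": [333],
            "relationships": [444],
        })
        print(users.find_one(1234567890))           # look up by _id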

    Read the article

  • How to get Host OS (operating system) parameter of ZIP archive in C#

    - by bao
    I have ZIP archives prepared on different OSes: Mac, Linux, Windows. On Windows the file names are encoded in DOS CP866; on Mac & Linux they are in UTF-8. I need to know (in code) on which OS the zip file was prepared so I can decode the file names correctly. There is a Host OS parameter in the "Central directory structure" of a zip file (see http://www.fileformat.info/format/zip/corion.htm ). How do I get the 1-byte Host OS parameter (offset 0005h) in C#?
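
    As a cross-check of what that byte contains, here is a minimal sketch in Python rather than C# (archive.zip is a made-up name): the high byte of the central directory's "version made by" field is exposed by the standard zipfile module as ZipInfo.create_system, where 0 is MS-DOS/FAT (typical of Windows tools) and 3 is Unix.

        import zipfile

        with zipfile.ZipFile("archive.zip") as zf:
            for info in zf.infolist():
                print(info.filename, "host OS byte:", info.create_system)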

    Read the article

  • CryptGenRandom to generate asp.net session id

    - by DoDo
    Hi! Does anyone have a working example of using CryptGenRandom to generate a session id (I need to use it in my IIS module)?

        HCRYPTPROV hCryptProv;
        BYTE pbData[16];

        if (CryptAcquireContext(&hCryptProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        {
            if (CryptGenRandom(hCryptProv, 8, pbData))
            {
                std::string s((const char *) pbData);
                printf(s.c_str());
            }
            else
            {
                MyHandleError("Error during CryptGenRandom.");
            }
        }
        else
        {
            MyHandleError("Error during CryptAcquireContext!\n");
        }

    I tried this code but it's not working quite well (I got it from MSDN), and this example doesn't work for me either ( http://www.codeproject.com/KB/security/plaintextsessionkey.aspx ), so if anyone knows how to generate a session id using this API, please let me know. Thanks anyway!
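
    The general idea, sketched in Python for brevity rather than the Win32 API the asker needs: a session id is usually the hex (or base64) encoding of cryptographically secure random bytes, not the raw bytes printed as if they were a C string, which is what the snippet above attempts.

        import os

        session_id = os.urandom(16).hex()   # 16 secure random bytes as 32 hex chars
        print(session_id)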

    Read the article

  • Quick way to do data lookup in PHP

    - by Ghostrider
    I have a data table with 600,000 records that is around 25 megabytes in size. It is indexed by a 4-byte key. Is there a way to find a row in such a dataset quickly with PHP without resorting to MySQL? The website in question is mostly static, with minor PHP code and no database dependencies, and is therefore fast. I would like to add this data without having to use MySQL if possible. In C++ I would memory-map the file and do a binary search in it. Is there a way to do something similar in PHP?
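
    The memory-map-and-binary-search idea, sketched in Python for illustration (PHP can do the same with fopen/fseek/fread and unpack); the record layout here is an assumption: fixed-width records sorted by a 4-byte big-endian key at the start of each record.

        import mmap
        import struct

        RECORD_SIZE = 44   # assumed: 4-byte key + 40-byte payload

        def lookup(path, key):
            with open(path, "rb") as f:
                m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                lo, hi = 0, len(m) // RECORD_SIZE
                while lo < hi:                      # classic binary search over records
                    mid = (lo + hi) // 2
                    off = mid * RECORD_SIZE
                    k = struct.unpack_from(">I", m, off)[0]
                    if k == key:
                        return m[off + 4:off + RECORD_SIZE]
                    elif k < key:
                        lo = mid + 1
                    else:
                        hi = mid
                return None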

    Read the article

  • How can I get Perl to detect the bad UTF-8 sequences?

    - by gorilla
    I'm running Perl 5.10.0 and Postgres 8.4.3, and I am inserting strings into a database that sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I get an exception: DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5 I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so this code should flag and skip the bad titles:

        if (not utf8::valid($title)) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();

    However, Perl seems to think that the strings are valid, yet it still throws the exceptions. How can I get Perl to detect the bad UTF-8?

    Read the article

  • how to cast an array of char into a single integer number?

    - by SepiDev
    Hi guys, I'm trying to read the contents of a PNG file. As you may know, all data is written in a 4-byte manner in PNG files, both text and numbers. So if we have the number 35234 it is saved this way: [1000][1001][1010][0010]. But sometimes numbers are shorter, so the first bytes are zero, and when I read the array and cast it from char* to integer I get the wrong number, for example [0000][0000][0001][1011]. Sometimes numbers are misinterpreted as negative numbers and sometimes as zero! Let me give you an intuitive example:

        char s_num[4] = {120, 80, 40, 1};
        int t_num = 0;
        t_num = int(s_num);   // => 3215279148 ??????

    The result should be 241 but the output is 3215279148. I wish I could explain my problem well! How can I cast such arrays into a single integer value?
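
    What "combine four bytes into one integer" amounts to, sketched in Python for illustration (PNG stores multi-byte integers big-endian, most significant byte first); the sample bytes are made up:

        import struct

        raw = bytes([0, 0, 0, 241])            # four length bytes read from the file
        value = struct.unpack(">I", raw)[0]    # big-endian unsigned 32-bit -> 241
        # equivalently: (raw[0] << 24) | (raw[1] << 16) | (raw[2] << 8) | raw[3]
        print(value)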

    Read the article

  • Powershell Select-Object from array not working

    - by Andrew
    I am trying to separate values in an array so I can pass them to another function. I am using the Select-Object cmdlet within a for loop to go through each line and separate the timestamp and value fields. However, no matter what I do, the code below only displays the first Select-Object variable for each line. The second Select-Object command doesn't seem to work, as my output is a blank line for each of the 6 rows. Any ideas on how to get both values?

        $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )

        ### Allocate a new encoder and turn the byte array into a string
        $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
        $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
        $csv2 = convertFrom-CSV $csvdata
        $newarray = $csv2 | Where-Object {$_.utilization -ne "0.0000000000e+00" -and $_.utilization -ne "nan" }

        for ( $n = 0; $n -lt $newarray.Length; $n++)
        {
            $nTime = $newarray[$n]
            $nUtil = $newarray[$n]
            $util = $nUtil | select-object Utilization
            $util
            $tstamp = $nTime | select-object timestamp
            $tstamp
        }

    Read the article

  • in java, which is better - three arrays of booleans or 1 array of bytes?

    - by joe_shmoe
    I know the question sounds silly, but consider this: I have an array of items and a labelling algorithm. At any point an item is in one of three states. The current version holds these states in a byte array, where 0, 1 and 2 represent the three states. Alternatively, I could have three arrays of booleans, one for each state. Which is better (consumes less memory) depends on how the JVM (Sun's version) stores the arrays: is a boolean represented by 1 bit? (P.S. don't start with all that "this is not the way OO/Java works" - I know, but here performance comes first. Plus the algorithm is simple and perfectly readable even in this form.) Thanks a lot

    Read the article

  • C# string equality operator returns false, but I'm pretty sure it should be true... What?!

    - by Daniel Schaffer
    I'm trying to write a unit test for a piece of code that generates a large amount of text. I've run into an issue where the "expected" and "actual" strings appear to be equal, but Assert.AreEqual throws, and both the equality operator and Equals() return false. The result of GetHashCode() is different for both values as well. However, putting both strings into text files and comparing with DiffMerge tells me they're the same. Additionally, using Encoding.ASCII.GetBytes() on both values and then using SequenceEquals to compare the resulting byte arrays returns true. The values are 34KB each, so I'll hold off putting them here for now. Any ideas? I'm completely stumped.
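
    A small diagnostic that often settles such cases, sketched in Python for illustration: find the first index where the two "equal-looking" strings differ and print the code points there, which makes invisible differences (\r\n versus \n, non-breaking spaces, smart quotes) show up. The sample strings are made up.

        def first_difference(a, b):
            n = min(len(a), len(b))
            for i in range(n):
                if a[i] != b[i]:
                    return i, hex(ord(a[i])), hex(ord(b[i]))
            return None if len(a) == len(b) else (n, None, None)

        print(first_difference("a\r\nb", "a\nb"))   # (1, '0xd', '0xa')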

    Read the article

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME dll to make it into an MP3. The problem is, I don't know how to access that microphone. Do I use a library that can detect a device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose I'll need to read all the data into a byte array and then serialize that into a WAV file, but I'm not sure. Can I get some pointers on this subject?

    Read the article

  • Multi-threaded downloader in C# question

    - by blez
    Currently I have a multi-threaded downloader class that uses HttpWebRequest/Response. All works fine, it's super fast, BUT.. the problem is that the data needs to be streamed to another app while it is downloading. That means it must be streamed in the right order: the first chunk first, and then the next in the queue. Currently my downloader class is sync and Download() returns byte[]. In my async multi-threaded class I make, for example, a list with 4 empty elements (slots) and pass each slot index to a thread that calls Download(). That simulates synchronization, but it's not what I need. How should I do the queue thing, to make sure the data is streamed as soon as the first chunk starts downloading?
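
    The reordering part of that queue, sketched in Python for illustration: chunks may finish in any order, but each one is buffered under its index and released strictly in sequence, so the consumer always sees chunk 0, then 1, then 2, and streaming can begin as soon as chunk 0 is in.

        class OrderedChunkBuffer:
            def __init__(self):
                self.pending = {}        # chunk index -> data
                self.next_index = 0      # next chunk the consumer is waiting for

            def add(self, index, data):
                """Store a finished chunk; return any chunks now ready to stream."""
                self.pending[index] = data
                ready = []
                while self.next_index in self.pending:
                    ready.append(self.pending.pop(self.next_index))
                    self.next_index += 1
                return ready

        buf = OrderedChunkBuffer()
        print(buf.add(1, b"second"))     # [] -- chunk 0 has not arrived yet
        print(buf.add(0, b"first"))      # [b'first', b'second']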

    Read the article

  • KeyListener problem

    - by rgksugan
    In my application I am using a JPanel to which I want to add a key listener. I did, but it does not work. Is it because I am using a SwingWorker to update the contents of the panel every second? Here is my code to update the panel:

        RenderedImage image = ImageIO.read(new ByteArrayInputStream((byte[]) get()));
        Graphics graphics = remote.rdpanel.getGraphics();
        if (graphics != null) {
            Image readyImage = new ImageIcon(UtilityFunctions.convertRenderedImage(image)).getImage();
            graphics.drawImage(readyImage, 0, 0, remote.rdpanel.getWidth(), remote.rdpanel.getHeight(), null);
        }

    Read the article

  • What platforms have something other than 8-bit char?

    - by Craig McQueen
    Every now and then, someone on SO points out that char (aka 'byte') isn't necessarily 8 bits. It seems that 8-bit char is almost universal. I would have thought that for mainstream platforms, it is necessary to have an 8-bit char to ensure its viability in the marketplace. Both now and historically, what platforms use a char that is not 8 bits, and why would they differ from the "normal" 8 bits? When writing code, and thinking about cross-platform support (e.g. for general-use libraries), what sort of consideration is it worth giving to platforms with non-8-bit char? In the past I've come across some Analog Devices DSPs for which char is 16 bits. DSPs are a bit of a niche architecture I suppose. (Then again, at the time hand-coded assembler easily beat what the available C compilers could do, so I didn't really get much experience with C on that platform.)

    Read the article

  • Me As Child Type In General Function

    - by Steven
    I have a MustInherit parent class with two child classes which inherit from the parent. How can I use (or cast) Me in a parent-class function as the child type of that instance? EDIT: My actual goal is to be able to serialize (BinaryFormatter.Serialize(Stream, Object)) either of my child classes. However, "repeating the code" in each child "seems" wrong. EDIT2: This is my Serialize function. Where should I implement it? Copying and pasting it into each child doesn't seem right, but casting the parent to a child doesn't seem right either.

        Public Function Serialize() As Byte()
            Dim bFmt As New BinaryFormatter()
            Dim mStr As New MemoryStream()
            bFmt.Serialize(mStr, Me)
            Return mStr.ToArray()
        End Function

    Read the article

  • How do I write raw binary data in Python?

    - by Chris B.
    I've got a Python program that stores and writes data to a file. The data is raw binary data, stored internally as str. I'm writing it out through a utf-8 codec. However, I get UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 25: character maps to <undefined> in the cp1252.py file. This looks to me like Python is trying to interpret the data using the default code page. But it doesn't have a default code page. That's why I'm using str, not unicode. I guess my questions are: How do I represent raw binary data in memory, in Python? When I'm writing raw binary data out through a codec, how do I encode/unencode it?
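
    A minimal sketch of how raw binary data is handled in current Python (in Python 2 terms, str already is the raw-byte type, so no codec should be involved at all): the data is kept as bytes and written through a file opened in binary mode, so no code page or codec can touch it. The file name blob.bin is made up.

        data = bytes([0x8d, 0x00, 0xff, 0x10])   # arbitrary raw bytes

        with open("blob.bin", "wb") as f:        # "wb": write bytes, not text
            f.write(data)

        with open("blob.bin", "rb") as f:        # "rb": read the bytes back unchanged
            assert f.read() == data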

    Read the article

  • Map derived class as an independent one with FNH's Automap

    - by Anton Gogolev
    Hi! Basically, I have an ImageMetadata class and an Image class, which derives from ImageMetadata. Image adds one property: byte[] Content, which actually contains the binary data. What I want to do is map these two classes onto one table, but I absolutely do not need NHibernate's inheritance support to kick in. I want to tailor FNH Automap to produce something like:

        <class name="ImageMetadata" ...>
          <property name="Name" ... />
          < ... />

        <class name="Image" ...>
          <property name="Name" ... />
          <property name="Content" ... />
          < ... />

    Is this at all possible?

    Read the article

  • Are C++ Reads and Writes of an int atomic

    - by theschmitzer
    I have two threads, one updating an int and one reading it. This value is a statistic where the order of the read and write is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen? For example, think of value = 0x0000FFFF, and incrementing value to 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more possible something like this is. I've always synchronized these types of accesses, but was curious what the community thought.

    Read the article

  • Java String to SHA1

    - by AeroDroid
    I'm trying to make a simple String-to-SHA1 converter in Java and this is what I've got...

        public static String toSHA1(byte[] convertme) {
            MessageDigest md = null;
            try {
                md = MessageDigest.getInstance("SHA-1");
            } catch (NoSuchAlgorithmException e) {
                e.printStackTrace();
            }
            return new String(md.digest(convertme));
        }

    When I pass it toSHA1("password".getBytes()), I get "[?a?????%l?3~??." I know it's probably a simple encoding fix like UTF-8, but could someone tell me what I should do to get what I want, which is "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"? Or am I doing this completely wrong? Thanks a lot!
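
    The gibberish almost certainly comes from turning the 20 raw digest bytes straight into a String; the expected "5baa61e4..." form is the digest rendered as hexadecimal. Sketched in Python for illustration rather than Java:

        import hashlib

        print(hashlib.sha1(b"password").hexdigest())
        # -> 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8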

    Read the article

  • F# powerpack and distribution

    - by rwallace
    I need arbitrary-precision rational numbers, which I'm given to understand are available in the F# PowerPack. My question is about the mechanics of distribution; my program needs to be able to compile and run both on Windows/.NET and Linux/Mono at least, since I have potential users on both platforms. As I understand it, the best procedure is:

        1. Download the PowerPack .zip, not the installer.
        2. Copy the DLL into my program directory.
        3. Copy the accompanying license file into my program directory, to make sure everything is above board.
        4. Declare references and go ahead and use the functions I need.
        5. Ship the above files along with my source and binary; since the DLL contains bytecode, it will work fine on any platform.

    Is this the correct procedure? Am I missing anything?

    Read the article

  • Screen capture code produces black bitmap

    - by wadetandy
    I need to add the ability to take a screenshot of the entire screen, not just the current window. The following code produces a bmp file with the correct dimensions, but the image is completely black. What am I doing wrong?

        void CaptureScreen(LPCTSTR lpszFilePathName)
        {
            BITMAPFILEHEADER bmfHeader;
            BITMAPINFO *pbminfo;
            HBITMAP hBmp;
            FILE *oFile;
            HDC screen;
            HDC memDC;
            int sHeight;
            int sWidth;
            LPBYTE pBuff;
            BITMAP bmp;
            WORD cClrBits;
            RECT rcClient;

            screen = GetDC(0);
            memDC = CreateCompatibleDC(screen);
            sHeight = GetDeviceCaps(screen, VERTRES);
            sWidth = GetDeviceCaps(screen, HORZRES);
            //GetObject(screen, sizeof(BITMAP), &bmp);
            hBmp = CreateCompatibleBitmap(screen, sWidth, sHeight);

            // Retrieve the bitmap color format, width, and height.
            GetObject(hBmp, sizeof(BITMAP), (LPSTR)&bmp);

            // Convert the color format to a count of bits.
            cClrBits = (WORD)(bmp.bmPlanes * bmp.bmBitsPixel);
            if (cClrBits == 1)
                cClrBits = 1;
            else if (cClrBits <= 4)
                cClrBits = 4;
            else if (cClrBits <= 8)
                cClrBits = 8;
            else if (cClrBits <= 16)
                cClrBits = 16;
            else if (cClrBits <= 24)
                cClrBits = 24;
            else
                cClrBits = 32;

            /* ... */

            pbminfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
            pbminfo->bmiHeader.biWidth = bmp.bmWidth;
            pbminfo->bmiHeader.biHeight = bmp.bmHeight;
            pbminfo->bmiHeader.biPlanes = bmp.bmPlanes;
            pbminfo->bmiHeader.biBitCount = bmp.bmBitsPixel;
            if (cClrBits < 24)
                pbminfo->bmiHeader.biClrUsed = (1 << cClrBits);
            pbminfo->bmiHeader.biCompression = BI_RGB;

            // Compute the number of bytes in the array of color
            // indices and store the result in biSizeImage.
            // The width must be DWORD aligned unless the bitmap is RLE compressed.
            pbminfo->bmiHeader.biSizeImage = ((pbminfo->bmiHeader.biWidth * cClrBits + 31) & ~31) / 8
                                             * pbminfo->bmiHeader.biHeight;

            // Set biClrImportant to 0, indicating that all of the
            // device colors are important.
            pbminfo->bmiHeader.biClrImportant = 0;

            CreateBMPFile(lpszFilePathName, pbminfo, hBmp, memDC);
        }

        void CreateBMPFile(LPTSTR pszFile, PBITMAPINFO pbi, HBITMAP hBMP, HDC hDC)
        {
            HANDLE hf;                  // file handle
            BITMAPFILEHEADER hdr;       // bitmap file-header
            PBITMAPINFOHEADER pbih;     // bitmap info-header
            LPBYTE lpBits;              // memory pointer
            DWORD dwTotal;              // total count of bytes
            DWORD cb;                   // incremental count of bytes
            BYTE *hp;                   // byte pointer
            DWORD dwTmp;
            int lines;

            pbih = (PBITMAPINFOHEADER) pbi;
            lpBits = (LPBYTE) GlobalAlloc(GMEM_FIXED, pbih->biSizeImage);

            // Retrieve the color table (RGBQUAD array) and the bits
            // (array of palette indices) from the DIB.
            lines = GetDIBits(hDC, hBMP, 0, (WORD) pbih->biHeight, lpBits, pbi, DIB_RGB_COLORS);

            // Create the .BMP file.
            hf = CreateFile(pszFile, GENERIC_READ | GENERIC_WRITE, (DWORD) 0, NULL,
                            CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, (HANDLE) NULL);
            hdr.bfType = 0x4d42;        // 0x42 = "B" 0x4d = "M"

            // Compute the size of the entire file.
            hdr.bfSize = (DWORD) (sizeof(BITMAPFILEHEADER) + pbih->biSize
                                  + pbih->biClrUsed * sizeof(RGBQUAD) + pbih->biSizeImage);
            hdr.bfReserved1 = 0;
            hdr.bfReserved2 = 0;

            // Compute the offset to the array of color indices.
            hdr.bfOffBits = (DWORD) sizeof(BITMAPFILEHEADER) + pbih->biSize
                            + pbih->biClrUsed * sizeof(RGBQUAD);

            // Copy the BITMAPFILEHEADER into the .BMP file.
            WriteFile(hf, (LPVOID) &hdr, sizeof(BITMAPFILEHEADER), (LPDWORD) &dwTmp, NULL);

            // Copy the BITMAPINFOHEADER and RGBQUAD array into the file.
            WriteFile(hf, (LPVOID) pbih,
                      sizeof(BITMAPINFOHEADER) + pbih->biClrUsed * sizeof(RGBQUAD),
                      (LPDWORD) &dwTmp, NULL);

            // Copy the array of color indices into the .BMP file.
            dwTotal = cb = pbih->biSizeImage;
            hp = lpBits;
            WriteFile(hf, (LPSTR) hp, (int) cb, (LPDWORD) &dwTmp, NULL);

            // Close the .BMP file.
            CloseHandle(hf);

            // Free memory.
            GlobalFree((HGLOBAL)lpBits);
        }

    Read the article

  • C++ USB Programming

    - by JB_SO
    Hi, I am new to hardware programming (especially USB), so please bear with me and my questions. I am using C++ and I need to send/receive some data (a byte array) to/from a USB port on a microprocessor board. Now, I have done some serial port programming before, and I know that for a serial port you have to open the port, set it up, perform I/O and finally close the port. I am guessing that using a USB port is not as simple as what I mentioned above. I do know that I want to use standard Microsoft drivers and standard Windows I/O commands to accomplish this, since I believe there are no drivers for the microprocessor board for me to interact with. If somebody can point me in the right direction as to the steps needed to "talk" to a USB port (open, setup, I/O) via standard Windows I/O commands, I would truly and greatly appreciate it. Thank you so much!!

    Read the article

  • Appropriate data structure for a buffer of the packets

    - by psihodelia
    How would I implement a buffer of packets where each packet is of the form:

        typedef struct {
            int32 IP;   // 4-byte IP address
            int16 ID;   // unique sequence id
        } t_Packet;

    What would be the most appropriate data structure which: (1) allows collecting at least 8000 such packets (fast insert and delete operations); (2) allows very fast filtering by IP address, so that only packets with a given IP are selected; (3) allows very fast find operations using ID as a key; (4) allows very fast (2), then (3), within the filtered results? RAM size does matter, e.g. no huge lookup table is possible.
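
    One way to cover (1)-(4), sketched in Python for illustration (the real structure would presumably be in C, given the fixed-size packets): a primary dictionary keyed by the unique ID gives fast insert, delete and find, and a secondary index from IP to the set of IDs with that IP gives fast filtering; (4) is then a membership check of the ID inside the filtered set.

        class PacketBuffer:
            def __init__(self):
                self.by_id = {}        # ID -> (IP, payload)
                self.ids_by_ip = {}    # IP -> set of IDs with that IP

            def insert(self, ip, pkt_id, payload=None):
                self.by_id[pkt_id] = (ip, payload)
                self.ids_by_ip.setdefault(ip, set()).add(pkt_id)

            def delete(self, pkt_id):
                ip, _ = self.by_id.pop(pkt_id)
                self.ids_by_ip[ip].discard(pkt_id)

            def find(self, pkt_id):                 # (3)
                return self.by_id.get(pkt_id)

            def filter_by_ip(self, ip):             # (2)
                return self.ids_by_ip.get(ip, set())

            def find_in_ip(self, ip, pkt_id):       # (4): filter by IP, then find by ID
                return pkt_id if pkt_id in self.filter_by_ip(ip) else None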

    Read the article

  • How to convert from integer to unsigned char in C, given integers larger than 256?

    - by Alf_InPogform
    As part of my CS course I've been given some functions to use. One of these functions takes a pointer to unsigned chars to write some data to a file (I have to use this function, so I can't just make my own purpose-built function that works differently, BTW). I need to write an array of integers whose values can be up to 4095 using this function (which only takes unsigned chars). However, am I right in thinking that an unsigned char can only have a max value of 255 because it is 1 byte long? I therefore need to use 4 unsigned chars for every integer? But casting doesn't seem to work with larger values of the integer. Does anyone have any idea how best to convert an array of integers to unsigned chars?
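
    The byte-splitting this calls for, sketched in Python for illustration: a value up to 4095 fits in two bytes (or in four, as the asker suggests), and a whole array can be packed into a flat byte sequence and unpacked back. The sample values are made up.

        import struct

        values = [4095, 17, 300]
        packed = struct.pack(">%dH" % len(values), *values)    # 2 big-endian bytes per value
        print(list(packed))                                    # [15, 255, 0, 17, 1, 44]

        restored = struct.unpack(">%dH" % len(values), packed)
        print(restored)                                        # (4095, 17, 300)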

    Read the article

  • Loading xml with encoding UTF 16 using XDocument

    - by Sangram
    Hi, I am trying to read an XML document using the XDocument method, but I am getting an error when the XML has <?xml version="1.0" encoding="utf-16"?> When I remove the encoding manually, it works perfectly. The error is "There is no Unicode byte order mark. Cannot switch to Unicode." I tried searching and landed up here: Why does C# XmlDocument.LoadXml(string) fail when an XML header is included? But I could not solve my problem. My code: XDocument xdoc = XDocument.Load(path); Any suggestions? Thank you.

    Read the article
