Search Results

Search found 9387 results on 376 pages for 'double byte'.

Page 319/376 | < Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >

  • Using different numeric variable types

    - by DataPimp
    I'm still pretty new, so bear with me on this one; my question(s) are not meant to be argumentative or petty, but during some reading something struck me as odd. I'm under the assumption that when computers were slow and memory was expensive, using the correct variable type was much more of a necessity than it is today. Now that memory is a bit easier to come by, people seem to have relaxed a bit. For example, you see this sample code everywhere: for (int i = 0; i < length; i++). Why int (-2,147,483,648 to 2,147,483,647) for length? Isn't byte (0-255) a better choice? So I'm curious about your opinion and what you believe to be best practice. I hate to think this would be used only because the acronym "int" is more intuitive for a beginner... or has memory just become so cheap that we really don't need to concern ourselves with such petty things, and therefore we should just use long so we can be sure any other numbers/types (within reason) can be cast automagically? ...or am I just being silly by concerning myself with such things?
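    A minimal C# sketch of the trade-off (my illustration, not from the question): the smaller counter saves three bytes of storage but silently wraps around, and the CLR does its arithmetic in 32-bit registers anyway, so a byte counter rarely buys speed.

        using System;

        class CounterSizes
        {
            static void Main()
            {
                Console.WriteLine(sizeof(byte)); // 1 byte,  0 to 255
                Console.WriteLine(sizeof(int));  // 4 bytes, -2,147,483,648 to 2,147,483,647
                Console.WriteLine(sizeof(long)); // 8 bytes, roughly +/- 9.2 * 10^18

                // A byte counter wraps past 255, so this loop would never end:
                // for (byte i = 0; i < 300; i++) { }
                for (int i = 0; i < 300; i++) { } // int is the safe default
            }
        }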

    Read the article

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: reading a huge (about a million line) text file (for this example assume it's a CSV of some sort) and keeping a reference to the beginning of each line for faster lookup in the future (read a line, starting at X). I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing the following

        Foo
        Bar
        Baz
        Bla
        Fasel

    and this very simple code

        using (var sr = new StreamReader(@"C:\Temp\LineTest.txt"))
        {
            string line;
            long pos = sr.BaseStream.Position;
            while ((line = sr.ReadLine()) != null)
            {
                Console.Write("{0:d3} ", pos);
                Console.WriteLine(line);
                pos = sr.BaseStream.Position;
            }
        }

    the output is:

        000 Foo
        025 Bar
        025 Baz
        025 Bla
        025 Fasel

    I can imagine that the stream is trying to be helpful/efficient and probably reads in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: any way to get the (byte, char) offset while reading a file line by line, without using a basic Stream and messing with \r, \n, \r\n and string encoding etc. manually? Not a big deal, really, I just don't like to build things that might exist already.
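    A minimal sketch of the do-it-yourself fallback (my code, not the poster's): count bytes while splitting lines yourself, so each reported offset is exact. It assumes a single-byte, ASCII-compatible encoding and handles \n and \r\n endings.

        using System;
        using System.IO;
        using System.Text;

        class OffsetLineReader
        {
            static void Main()
            {
                // BufferedStream keeps the byte-at-a-time loop reasonably fast.
                using (var fs = new BufferedStream(File.OpenRead(@"C:\Temp\LineTest.txt")))
                {
                    var sb = new StringBuilder();
                    long lineStart = 0, pos = 0;
                    int b;
                    while ((b = fs.ReadByte()) != -1)
                    {
                        pos++;
                        if (b == '\n')
                        {
                            // Trim a trailing \r so \r\n and \n both work.
                            Console.WriteLine("{0:d3} {1}", lineStart,
                                sb.ToString().TrimEnd('\r'));
                            sb.Length = 0;
                            lineStart = pos; // next line starts after the newline
                        }
                        else
                        {
                            sb.Append((char)b);
                        }
                    }
                    if (sb.Length > 0) // last line without a trailing newline
                        Console.WriteLine("{0:d3} {1}", lineStart, sb.ToString());
                }
            }
        }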

    Read the article

  • Python line file iteration and strange characters

    - by muckabout
    I have a huge gzipped text file which I need to read, line by line. I go with the following:

        for i, line in enumerate(codecs.getreader('utf-8')(gzip.open('file.gz'))):
            print i, line

    At some point late in the file, the Python output diverges from the file. This is because lines are getting broken due to weird special characters that Python thinks are newlines. When I open the file in 'vim', they are correct, but the suspect characters are formatted weirdly. Is there something I can do to fix this? I've tried other codecs, including utf-16 and latin-1. I've also tried with no codec. I looked at the file using 'od'. Sure enough, there are \n characters where they shouldn't be. But the "wrong" ones are preceded by a weird character. I think there's some encoding here where some characters are two bytes, with the trailing byte looking like a \n if not viewed properly. If I replace:

        gzip.open('file.gz')

    with:

        os.popen('zcat file.gz')

    it works fine (and is actually quite a bit faster). But I'd like to know where I'm going wrong.

    Read the article

  • An Ideal Keyboard Layout for Programming

    - by Jon Purdy
    I often hear complaints that programming languages that make heavy use of symbols for brevity, most notably C and C++ (I'm not going to touch APL), are difficult to type because they require frequent use of the shift key. A year or two ago, I got tired of it myself, downloaded Microsoft's Keyboard Layout Creator, made a few changes to my layout, and have not once looked back. The speed difference is astounding; with these few simple changes I am able to type C++ code around 30% faster, depending of course on how hairy it is; best of all, my typing speed in ordinary running text is not compromised. My questions are these: what alternate keyboard layouts have existed for programming, which have gained popularity, are any of them still in modern use, do you personally use any altered layout, and how can my layout be further optimised? I made the following changes to a standard QWERTY layout. (I don't use Dvorak, but there is a programmer Dvorak layout worth mentioning.)

    • Swap numbers with symbols in the top row, because long or repeated literal numbers are typically replaced with named constants;
    • Swap backquote with tilde, because backquotes are rare in many languages but destructors are common in C++;
    • Swap minus with underscore, because underscores are common in identifiers;
    • Swap curly braces with square brackets, because blocks are more common than subscripts; and
    • Swap double quote with single quote, because strings are more common than character literals.

    I suspect this last is probably going to be the most controversial, as it interferes the most with running text by requiring use of shift to type common contractions. This layout has significantly increased my typing speed in C++, C, Java, and Perl, and somewhat increased it in LISP and Python.

    Read the article

  • Groovy / Scala / Java under the hood

    - by Jack
    I used Java for 6-7 years, then some months ago I discovered Groovy and started to save a lot of typing. Then I wondered how certain things worked under the hood (because Groovy performance is really poor) and understood that, to give you dynamic typing, every Groovy object carries a MetaClass that handles all the things the JVM can't handle by itself. Of course this introduces a layer in the middle, between what you write and what you execute, that slows everything down. Then some days ago I started reading about Scala. How do these two languages compare in their bytecode translations? How much do they add to the normal structure that would be obtained from plain Java code? I mean, Scala is statically typed, so its wrappers of Java classes should be lighter, since many things are checked at compile time, but I'm not sure about the real differences in what goes on inside. (I'm not talking about the functional aspect of Scala compared to the other ones; that's a different thing.) Can someone enlighten me? From WizardOfOdds it seems like the only way to get less typing and the same performance would be to write an intermediate translator that translates something into Java code (letting javac compile it) without altering how things are executed, just adding syntactic sugar without caring about other fallbacks of the language itself.

    Read the article

  • Fastest way to convert a list of doubles to a unique list of integers?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto-increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table into something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast back to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely:

        UPDATE table SET `key`='3.00' WHERE `key`='2.50';

    Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure out anything that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used in the table.
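    A minimal sketch of the rank-assignment idea (my code; the question doesn't name a client language, so C# is an assumption). The collision the poster describes is avoided with a two-pass rename: first park every row at an impossible negative key, then assign the final ranks, so no UPDATE ever targets a key that is still in use.

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Text;

        class Rekey
        {
            static void Main()
            {
                // Sample keys, sorted ascending (as SELECT ... ORDER BY would return them).
                var keys = new List<double> { 1.00, 2.00, 2.50, 2.60, 3.00 };

                var sql = new StringBuilder();
                // Pass 1: park each row at -rank; negatives can't collide with live keys.
                for (int rank = 1; rank <= keys.Count; rank++)
                    sql.AppendFormat(CultureInfo.InvariantCulture,
                        "UPDATE table SET `key`={0} WHERE `key`={1};\n",
                        -rank, keys[rank - 1]);
                // Pass 2: move each row from its parking spot to its final rank.
                for (int rank = 1; rank <= keys.Count; rank++)
                    sql.AppendFormat("UPDATE table SET `key`={0} WHERE `key`={1};\n",
                        rank, -rank);

                Console.Write(sql); // run the statements inside one transaction
            }
        }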

    Read the article

  • Why execution of a portion of code loaded from external file is not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things stripped off for the sake of concision and clarity. A binary file whose content is described below

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random Bytes and display here ...
          // Instructions of loading of the binary file into MyBuffer using merely **GetMem** here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (The outcome of the processing !)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    and works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?

    Read the article

  • Where to post code for open source usage?

    - by Douglas
    I've been working for a few weeks now with the Google Maps API v3, and have done a good bit of development for the map I've been creating. Some of the things I've done have had to be done to add usability where there previously was not any, at least not that I could find online. Essentially, I made a list of what had to be done, searched all over the web for the ways to do what I needed, and found that some were not (at the time) possible (in the "grab an example off the web" sense). Thus, in my working on this map, I have created a number of very useful tools, which I would like to share with the development community. Is there anywhere I could use as a hub, apart from my portfolio ( http://dougglover.com ), to allow people to view and recycle my work? I know how hard it can be to need to do something, and be unable to find the solution elsewhere, and I don't think that if something has been done before, it should necessarily need to be written again and again. Hence open source code, right? Firstly, I was considering coming on here and asking a question, and then just answering it. Problem there is I assume that would just look like a big reputation grab. If not, please let me know and I'll go ahead and do that so people here can see it. Other suggestions appreciated. Some stuff I've made:

    A (new and improved) LatLng generator
    • Works quicker; generates a LatLng based on the position of a draggable marker
    • Allows searching for an address to place the marker on/near the desired location (much better than having to scroll to your location all the way from Siberia)
    • Since it's a draggable marker, double-clicking zooms in, instead of creating a new LatLng marker like the one I was originally using

    The ability to create entirely custom "Smart Paths"
    • Plot LatLng points on the map which connect to each other just like they do using the actual Google Maps
    • Using Dijkstra's algorithm with Javascript, the routing is intelligent and always gives the shortest possible route, using the points provided
    • Simple, easy-to-read multi-dimensional array system allows for easily adding new points to the grid

    Any suggestions, etc. appreciated.

    Read the article

  • Downloading attachments to directory with IMAP in PHP, randomly works

    - by Nick
    I found PHP code online to download attachments to a directory using IMAP, from here: http://www.nerdydork.com/download-pop3imap-email-attachments-with-php.html I modified it slightly, changing

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure->parts);

    to

        $structure = imap_fetchstructure($mbox, $jk);
        $parts = ($structure);

    to get it to run properly, as otherwise I got an error about how stdClass doesn't define a property called $parts. Doing that, I was able to download all the attachments. I tested it again recently though, and it didn't work. Well, it didn't work 6 times, worked the 7th, and then hasn't worked since. I'm thinking it has something to do with me screwing up the parts handling, since count($parts) keeps returning 1 for each message, so it's not finding any attachments, I think. Since it downloaded the attachments at one point with no issues, I feel confident that this is the area where things are getting screwed up. Before this block of code is a for loop that goes through each message in the box, and after it is a loop that just goes through $parts for each imap structure. Thanks for any help you can provide. I looked at the imap_fetchstructure page on php.net and can't figure out what I'm doing wrong. Edit: I just double-checked the folder after typing up my question and it all popped up. I feel like I'm going nuts. I hadn't run the code since a few minutes before I started typing this, and it doesn't make sense to me that it would take this long to trigger. I have some 800 messages in the mailbox, but I figured since it printed my statement at the very end of the PHP that all of the file creation work was done.

    Read the article

  • Using SHFileOperation: What errors are occuring

    - by Sascha
    Hello. I am using the function SHFileOperation() to send a file to the recycling bin, and I am getting 2 errors that I do not know the meaning of, because with this function the error codes are not GetLastError() values. When the function SHFileOperation() fails, the return values are 0x57 (decimal 87) and 0x2 (decimal 2). Can you help me discover the definitions of these errors (especially when you consider that with this function, the errors are not part of the GetLastError() codes)? Some important information:

    • I am using Windows 7 (and I know that MSDN says to use IFileOperation instead of SHFileOperation, but I want to make my app backwards compatible, which is why I am using SHFileOperation). If the error is occurring because I am using SHFileOperation on Windows 7, what solution could I use to make this work on all versions of Windows from 2000 and up?
    • I have debugged extensively and, as far as I know, my SHFILEOPSTRUCT is correct (correct flags used, .pFrom is a double-null-terminated string). One thing I know for sure is that my path to the file is correct (leads to a real file and is correctly formatted).
    • About 2/5 times the SHFileOperation() works, meaning it sends the file to the recycle bin and does not return an error.

        BOOL result;
        SHFILEOPSTRUCT fileStruct;
        fileStruct.hwnd = hwnd;
        fileStruct.wFunc = FO_DELETE;
        fileStruct.pFrom = dest.c_str();
        fileStruct.fFlags = FOF_FILESONLY; // FOF_ALLOWUNDO
        fileStruct.fAnyOperationsAborted = result;

        // Call operation (delete file)
        int success = SHFileOperation( &fileStruct );

        // if delete was successful
        if ( success != 0 )
        {
            printf( "%s \t %X %d \n", dest.c_str(), success, success );
            cout << result << endl;
            MessageBox( hwnd, "Failed to delete file", "Error", MB_OK|MB_ICONERROR );
            return;
        }

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
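    The question is Java, but the map-of-streams idea is framework-neutral; here is a hedged sketch in C# (Dictionary playing the role of HashMap, BufferedStream of the buffered OutputStream). The one-length-byte message framing is my assumption, not from the question, and with ~6,000 open files an OS handle limit may force an LRU cache of streams instead.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        class MessageSplitter
        {
            // Assumed framing: 6-byte ASCII type key, one length byte, payload.
            static void Split(Stream input)
            {
                var outputs = new Dictionary<string, Stream>();
                try
                {
                    var key = new byte[6];
                    while (ReadExactly(input, key, 6))
                    {
                        int len = input.ReadByte();
                        if (len < 0) break;
                        var payload = new byte[len];
                        if (!ReadExactly(input, payload, len)) break;

                        string type = Encoding.ASCII.GetString(key);
                        Stream outStream;
                        if (!outputs.TryGetValue(type, out outStream))
                        {
                            // One buffered stream per type, opened lazily and
                            // kept open so files are never reopened per message.
                            outStream = new BufferedStream(
                                File.Open(type + ".dat", FileMode.Append));
                            outputs[type] = outStream;
                        }
                        outStream.Write(payload, 0, len);
                    }
                }
                finally
                {
                    foreach (var s in outputs.Values) s.Dispose();
                }
            }

            static bool ReadExactly(Stream s, byte[] buf, int count)
            {
                int off = 0;
                while (off < count)
                {
                    int n = s.Read(buf, off, count - off);
                    if (n <= 0) return false;
                    off += n;
                }
                return true;
            }
        }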

    Read the article

  • Static Libraries on iPhone device

    - by Akusete
    I have two projects, a Cocoa iPhone application and a static library which it uses. I've tested it successfully on the iPhone simulator, but when I try to deploy it to my iPhone device I get (symbol not found) link errors. If I remove the dependency on the library, the project builds/runs fine. I have made sure all the build settings are set to iPhoneOS, not the simulator. I'm sure it's something simple, but has anyone run into similar problems moving from the iPhone simulator to the device? --EDIT: I have managed to create new projects (one for the application and one for the static library), and successfully get them to run on the iPhone or simulator. But I have a very strange problem... for each specific project I cannot get it working for BOTH the device and the simulator... I have double-checked the build settings, made sure the libraries that are being referenced are for the matching build settings (I believe), but I cannot resolve these linking errors. I think I must be doing something very wrong... all the Apple documentation says 'it's super simple - one click', but this is giving me a lot of problems. This is probably something to do with Xcode build settings, but I cannot seem to understand why selecting the different build platforms and rebuilding the libraries does not work.

    Read the article

  • How to create a new widget for dojox.grid.cells.dijit?

    - by the_drow
    I am trying to create a button widget for dojox.grid. My problems are: 1) The button is only shown when I double click the grid. 2) I can't figure out how to set attributes through declarative markup. It seems that the markupFactory function is responsible for it, but it doesn't set the widget's label. The following code demonstrates what I've got so far:

        dojo.require("dojox.grid.DataGrid");
        dojo.require("dojo.data.ItemFileWriteStore");
        dojo.require("dijit.form.Button");
        dojo.require("dojox.grid.cells.dijit");
        dojo.require("dojo.parser");

        dojo.declare("dojox.grid.cells.Button", dojox.grid.cells._Widget, {
            widgetClass: dijit.form.Button,
            alwaysEditing: true,
            constructor: function(inCell) {
                this.inherited(arguments);
                this.widget = new dijit.form.Button;
            },
            setValue: function(inRowIndex, inValue){
                if (this.widget) {
                    this.widget.attr('value', inValue);
                } else {
                    this.inherited(arguments);
                }
            }
        });

        dojox.grid.cells.Button.markupFactory = function(node, cell) {
            dojox.grid.cells._Widget.markupFactory(node, cell);
        }

    Read the article

  • recommended format to save time with MJD + BCD format in database

    - by pierr
    Hi. There is a time represented in MJD and BCD format with 5 bytes. I am wondering what the recommended format is for saving this date-time in the SQLite database so that users can search against it. My first attempt was to save it just as it is, that is, a 5-byte string. The user will use the same format to search, and the result will be converted to Unix time by the user with the following code. However, later, it was suggested that I save the time as an integer - the UTC time, for example. But I cannot find a standard way to do the conversion. I feel this is a common issue and would like to hear your comments.

        time_t sidate_to_unixtime(unsigned char sidate[])
        {
            int k = 0;
            struct tm tm;
            double mjd;

            /* check for the undefined value */
            if ((sidate[0] == 0xff) && (sidate[1] == 0xff) && (sidate[2] == 0xff)
                && (sidate[3] == 0xff) && (sidate[4] == 0xff))
            {
                return -1;
            }

            memset(&tm, 0, sizeof(tm));
            mjd = (sidate[0] << 8) | sidate[1];

            tm.tm_year = (int) ((mjd - 15078.2) / 365.25);
            tm.tm_mon = (int) (((mjd - 14956.1) - (int) (tm.tm_year * 365.25)) / 30.6001);
            tm.tm_mday = (int) mjd - 14956 - (int) (tm.tm_year * 365.25) - (int) (tm.tm_mon * 30.6001);
            if ((tm.tm_mon == 14) || (tm.tm_mon == 15))
                k = 1;
            tm.tm_year += k;
            tm.tm_mon = tm.tm_mon - 2 - k * 12;

            tm.tm_sec = bcd_to_integer(sidate[4]);
            tm.tm_min = bcd_to_integer(sidate[3]);
            tm.tm_hour = bcd_to_integer(sidate[2]);

            return mktime(&tm);
        }

    Read the article

  • Does anyone have documentation on SHGetSysColor?

    - by Paulo Santos
    I'm trying to find any reference for this function, but I haven't found anything. All I have is an obscure KB article from Microsoft noting that a programmer made a boo-boo when coding a part of Windows Mobile 6, where he should have called SHGetSysColor but instead called GetSysColor, which gives a completely different color for the same spec. From what I could gather, GetSysColor reads a color value in the registry from HKEY_LOCAL_MACHINE\Software\Microsoft\Color\SHColor or HKEY_LOCAL_MACHINE\Software\Microsoft\Color\DefSHColor and returns the color according to the index. In that registry key I have the following value for a standard Win Mobile 6.5:

        "DefSHColor"=hex:\
        ff,00,00,00,00,00,00,00,dd,dd,dd,00,ff,ff,cc,00,ff,ff,ff,00,15,af,bc,00,15,\
        af,bc,00,c9,e7,e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00,14,9c,a7,00,14,9c,\
        a7,00,15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,c9,e7,e9,00,37,c7,d3,00,37,c7,d3,\
        00,ff,ff,ff,00,00,b7,c9,00,14,9c,a7,00,ff,ff,ff,00,15,af,bc,00,84,84,c3,00,\
        15,af,bc,00,14,9c,a7,00,ff,ff,ff,00,ff,ff,ff,00,00,00,00,00,ff,ff,ff,00,00,\
        00,00,00,ff,ff,ff,00,2e,44,4f,00,00,14,3c,00,00,f0,ff,00,ff,ff,ff,00,c9,e7,\
        e9,00,14,9c,a7,00,ff,ff,ff,00,14,9c,a7,00

    And I realized that each four bytes represent a different color (RR,GG,BB,AA -- the AA I'm assuming here, as every color there has the AA byte as 00, which would mean that it's a solid color). What I can't get a fix on is what each index means, as I have 41 different colors in there. Googling for SHGetSysColor gives me only 7 matches; two of them are the KB article from Microsoft (one in English, the other in French), one is from a Russian site (which I don't read), another two are from freepascal.org, and one is from Koders.com describing the commctl.def file. I went to commctl.h trying to see if I could find a reference to this function, and found absolutely nothing. No search on MSDN, whether from Google, Bing, or the default MSDN search, gave me any result. So, does anyone know what indexes we are talking about here?

    Read the article

  • Stored procedure woes ... inserting binary ...

    - by Wardy
    Ok, so I have this stored proc in my SQL 2008 database (works in 2005 too / used to) ...

        CREATE PROCEDURE [dbo].[SetBinaryContent]
            @Ref nvarchar(50),
            @Content varbinary(MAX),
            @ObjectID uniqueidentifier
        AS
        BEGIN
            DELETE ObjectContent WHERE ObjectId = @ObjectID AND Ref = @Ref

            IF DATALENGTH(@Content) > 5
            BEGIN
                INSERT INTO ObjectContent (Ref, BinaryContent, ObjectId)
                VALUES (@Ref, @Content, @ObjectId)
            END

            UPDATE Objects SET [Status] = 1 WHERE ID = @ObjectID
        END

    Relatively simple: I take a byte array in C# and chuck it in @Content, I then give it a guid and string for the other params and off we go... Great, it used to work... but it doesn't anymore... so erm... what's wrong with this stored proc? I've stepped through my C# code thinking I screwed up somehow in that, but it definitely adds the params and gives them the correct values, so what would cause the server to just stop executing this stored proc correctly? When called, this proc executes but nothing changes in the db... no new records are added to the ObjectContent table. Weird, huh...
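    For reference, a minimal sketch of the calling side (my code with guessed names; the question doesn't show it). When a proc "stops working", a quietly truncated or mistyped parameter is a usual suspect, so pinning SqlDbType and size explicitly is worth checking:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SetBinaryContentCall
        {
            static void Save(string connStr, string refName, byte[] content, Guid objectId)
            {
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("dbo.SetBinaryContent", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@Ref", SqlDbType.NVarChar, 50).Value = refName;
                    // Size -1 means varbinary(MAX); a fixed size can silently truncate.
                    cmd.Parameters.Add("@Content", SqlDbType.VarBinary, -1).Value = content;
                    cmd.Parameters.Add("@ObjectID", SqlDbType.UniqueIdentifier).Value = objectId;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    // Note: the proc only INSERTs when DATALENGTH(@Content) > 5,
                    // so arrays of 5 bytes or fewer leave ObjectContent unchanged.
                }
            }
        }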

    Read the article

  • Display new image on web page

    - by RuiT
    Hello, I'm uploading images to a folder and I want to display each new image on a web page every time it is saved in the folder. I can display the initial image, which is already in the folder, and I can also detect when a new image is saved into the folder, but I don't know how to display the new images. I'm new to web development. Can somebody help me? Here's the code:

        public partial class _Default : System.Web.UI.Page
        {
            string DirectoryPath = "C:\\Users\\Desktop\\PhotoUpload\\Uploads\\";

            protected void Page_Load(object sender, EventArgs e)
            {
                FileSystemWatcher watcher = new FileSystemWatcher();
                try
                {
                    watcher.Path = DirectoryPath;
                    watcher.Created += new FileSystemEventHandler(watcher_Created);
                    watcher.EnableRaisingEvents = true;

                    Image image = Image.FromFile(DirectoryPath + "inicial.jpg");
                    MemoryStream ms = new MemoryStream();
                    image.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                    byte[] im = ms.ToArray();
                    Context.Response.ContentType = "Image/JPG";
                    Context.Response.BinaryWrite(im);
                }
                catch (Exception ex) { }
            }

            void watcher_Created(object sender, FileSystemEventArgs e)
            {
                Console.WriteLine("File Created: Name: " + e.Name);
                try
                {
                    // How to display new image?
                }
                catch (Exception ex) { }
            }
        }
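    One direction (my sketch, not from the question): by the time watcher_Created fires, the page's response has already been sent, so the server can't push a new image into it. Instead, serve whatever image is newest through a separate handler and let the page re-request it; all names below are illustrative.

        // LatestImage.ashx - returns the most recently written image in the
        // upload folder on every request, so the page can poll or refresh it.
        using System;
        using System.IO;
        using System.Linq;
        using System.Web;

        public class LatestImage : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var dir = new DirectoryInfo(@"C:\Users\Desktop\PhotoUpload\Uploads\");
                FileInfo newest = dir.GetFiles("*.jpg")
                                     .OrderByDescending(f => f.LastWriteTimeUtc)
                                     .FirstOrDefault();
                if (newest == null)
                {
                    context.Response.StatusCode = 404;
                    return;
                }
                context.Response.ContentType = "image/jpeg";
                context.Response.WriteFile(newest.FullName);
            }

            public bool IsReusable { get { return true; } }
        }

    On the page itself, an <img src="LatestImage.ashx"> plus a periodic client-side refresh (or a meta refresh) would then pick up each new upload.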

    Read the article

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach -- I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge:

        CFDataRef CopyImagePixels(CGImageRef inImage) {
            return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
        }

        CGImageRef img = originalImage.CGImage;
        CFDataRef dataref = CopyImagePixels(img);
        UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
        int length = CFDataGetLength(dataref);

        /* brighten R, G, B of each RGBA pixel, clamping at 255; skip alpha */
        for (int index = 0; index < length; index += 4) {
            for (int i = 0; i < 3; i++) {
                if (data[index + i] + value > 255) {
                    data[index + i] = 255;
                } else {
                    data[index + i] += value;
                }
            }
        }

        size_t width = CGImageGetWidth(img);
        size_t height = CGImageGetHeight(img);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
        size_t bytesPerRow = CGImageGetBytesPerRow(img);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(img);
        NSLog(@"bitmapinfo: %d", bitmapInfo);

        CFDataRef newData = CFDataCreate(NULL, data, length);
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
        CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                          bytesPerRow, colorspace, bitmapInfo, provider,
                                          NULL, true, kCGRenderingIntentDefault);
        [iv setImage:[UIImage imageWithCGImage:newImg]];
        CGImageRelease(newImg);
        CGDataProviderRelease(provider);

    Read the article

  • How do you save and retrieve a Key/IV pair securely?

    - by Shawn Steward
    I'm using VB.Net's RijndaelManaged (RM) to encrypt files, using the RM.GenerateKey and RM.GenerateIV methods to generate the Key and IV, and encrypting the file using the CryptoStream class. I'm planning on saving this Key and IV to a file and want to make sure I'm doing it the right way. I am combining the IV+Key, encrypting that with my RSA public key, and writing it out to a file. Then, to decrypt, I use the RSA private key on this file to get the IV+Key, split them up, set RM.Key and RM.IV to these values, and run the decryptor. Is this the best method to accomplish this, or is there a preferred method for saving the IV & Key? Also, what's the best way to construct and deconstruct the byte array? I used the .Concat method to join them together and that seems to work well, but I can't seem to find anything as easy to deconstruct it with. I played with the .Take method, which takes the first x bytes, and it works for the first part, but I can't find anything that gets the rest of it.
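    A minimal sketch of the split/join step (my code; the question is VB.Net, but the same framework calls apply in C#): Buffer.BlockCopy avoids LINQ entirely, and since the IV length is fixed by the algorithm's block size, the boundary is unambiguous.

        using System;
        using System.Security.Cryptography;

        class KeyIvPacking
        {
            static byte[] Join(byte[] iv, byte[] key)
            {
                var packed = new byte[iv.Length + key.Length];
                Buffer.BlockCopy(iv, 0, packed, 0, iv.Length);
                Buffer.BlockCopy(key, 0, packed, iv.Length, key.Length);
                return packed; // encrypt this blob with the RSA public key
            }

            static void Split(byte[] packed, SymmetricAlgorithm rm)
            {
                int ivLen = rm.BlockSize / 8; // 16 bytes for the Rijndael default
                var iv = new byte[ivLen];
                var key = new byte[packed.Length - ivLen];
                Buffer.BlockCopy(packed, 0, iv, 0, ivLen);
                Buffer.BlockCopy(packed, ivLen, key, 0, key.Length);
                rm.IV = iv;
                rm.Key = key;
            }
        }

    (For the record, the LINQ counterpart of .Take for "the rest" is .Skip(ivLen).ToArray().)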

    Read the article

  • OpenGL Tearing Problem

    - by kaykun
    Hi, I'm using Win32 and OpenGL, and I have a window set up with the projection set by glOrtho to the window's coordinates. I have double buffering enabled; I tested it with glGet as well. My program always seems to tear any primitives that I try to draw if they are being constantly translated. Here is my OpenGL initialization function:

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glViewport(0, 0, 640, 480);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 640, 0, 480, 0, 100);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glDrawBuffer(GL_BACK);
        glLoadIdentity();

    And this is my rendering function; gMouseX and gMouseY are the coordinates of the mouse:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glTranslatef(gMouseX, gMouseY, 0.0f);
        glColor3f(0.5f, 0.5f, 0.5f);
        glBegin(GL_TRIANGLES);
            glVertex2f(0.0f, 128.0f);
            glVertex2f(128.0f, 0.0f);
            glVertex2f(0.0f, 0.0f);
        glEnd();
        SwapBuffers(hDC);

    The same tearing problem occurs regardless of how often the rendering function runs. Is there something I'm doing wrong or missing here? Thanks for any help.

    Read the article

  • How do I get a ComboBox SelectionChanged event to fire from a nested ListBoxItem?

    - by Stephen McCusker
    This is a rather complex problem that has me really confused right now. Any help would be greatly appreciated. The setup:

        ListBox of Type A UserControls
        - ListBoxItem of Type A UserControl
        -- ListBox of Type B UserControls
        --- ListBoxItem of Type B UserControl
        ---- ListBox of Type C UserControls
        ----- ListBoxItem of Type C UserControl (contains the ComboBox)

    In other words, the Type A control has a ListBox of Type B controls, which has a ListBox of Type C controls. All of the controls are hierarchical in nature: Type A contains the data needed to load the Type B controls, and Type B contains the data needed to load the Type C controls. The Type C control has a standard ComboBox in it for changing the values of the present items. In addition to the above structure, I have drag and dropping tied to the PreviewMouseLeftButtonDown event on both the Type A and Type B UserControl levels to handle reordering/deleting/etc. commands in the GUI. All of this is working as intended. The problem: when I attempt to change the value in the ComboBox, the SelectionChanged event never fires on the Type C "level" unless I'm careful enough to click on the borders/spacing in between any Type A or B controls. This happens when my ComboBox popout menu overlaps a Type A or B control located below itself. The selection events for Type A or B are firing instead of the Type C events, so the ComboBox is never changing its value reliably. In the debugger, the code for handling the drag and drop is triggering on the next ListBoxItem that's located underneath the ComboBox. Thoughts: is there a way I can make my ComboBox popup take precedence over the items behind it while double-nested in a ListBox (i.e., ignore anything behind it while it's open)? Is there some way to reroute the incorrectly firing SelectionChanged events down to the ComboBox that's supposed to be triggering them?
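    A minimal sketch of one guard worth trying (my code, with assumed handler names): in the outer PreviewMouseLeftButtonDown handlers, bail out when the click originated inside a ComboBox, so the tunneling drag logic doesn't swallow clicks meant for it. This covers clicks routed through the main visual tree; it is a sketch of the pattern, not a guaranteed fix for the popup-overlap case.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;
        using System.Windows.Media;

        public partial class TypeAControl : UserControl
        {
            // Assumed wiring: the existing drag-start handler on Type A/B items.
            private void Item_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                if (FindAncestor<ComboBox>(e.OriginalSource as DependencyObject) != null)
                    return; // the click belongs to the ComboBox; don't start a drag

                // ... existing drag-and-drop start logic ...
            }

            private static T FindAncestor<T>(DependencyObject node) where T : DependencyObject
            {
                while (node != null)
                {
                    T match = node as T;
                    if (match != null)
                        return match;
                    node = VisualTreeHelper.GetParent(node);
                }
                return null;
            }
        }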

    Read the article

  • Windows cmd encoding change causes Python crash.

    - by Alex
    First I change the Windows CMD encoding to utf-8 and run the Python interpreter:

        chcp 65001
        python

    Then I try to print a unicode string inside it, and when I do this Python crashes in a peculiar way (I just get a cmd prompt in the same window).

        >>> import sys
        >>> print u'ëèæîð'.encode(sys.stdin.encoding)

    Any ideas why it happens and how to make it work? UPD: sys.stdin.encoding returns 'cp65001'. UPD2: It just came to me that the issue might be connected with the fact that utf-8 uses a multi-byte character set (kcwu made a good point on that). I tried running the whole example with 'windows-1250' and got 'ëeaî?'. Windows-1250 uses a single-byte character set, so it worked for those characters it understands. However, I still have no idea how to make 'utf-8' work here. UPD3: Oh, I found out it is a known Python bug. I guess what happens is that Python copies the cmd encoding, 'cp65001', to sys.stdin.encoding and tries to apply it to all the input. Since it fails to understand 'cp65001', it crashes on any input that contains non-ASCII characters.

    Read the article

  • Write Null/Nothing value with Databinding

    - by clawson
    I have extended a MaskedTextBox component to add some functionality. The Text property of the extended MaskedTextBox is bound to a DateTime? property, and the format of the binding is set to a time format of "HH:mm:ss" (i.e. 24hr time), so that this masked text box will capture and display a time. The extra functionality I have added is to make the component readonly unless the component is double-clicked or the enter button is pressed (the back color of the control helps to inform the users whether the component is locked/readonly or not). When the enter button is pressed I also suspend the bindings, so that the bound data isn't pushed back into the control and the user's input won't be lost. The information is then written back to the bound value, and databindings resumed, when the user presses the enter key again. This all works fine up to here, with values written and displayed as would be expected. However, I also want to write the null or Nothing value to the DateTime? property if the user hasn't entered any text (or entered invalid text, but let's just stick with no text) when the enter key is pressed to submit the new value. Unlike with other valid entries in the MaskedTextBox, with no text entered, when I execute:

        Me.DataBindings("Text").WriteValue()

    (when 'locking' the MaskedTextBox) it then branches to the bound property's Get method as I step into each line of code in the debugger (as opposed to the Set method with other valid entries). How can I write this null/Nothing/"" value to the DateTime? property when no text ("") has been entered into the MaskedTextBox? Thanks for the help!
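    One avenue (a hedged sketch; the question is VB.Net, but these are the same WinForms Binding members in C#): the Binding.Parse event runs before the value is written back to the data source, so an effectively empty box can be turned into null there. The data-source and property names below are my assumptions, not from the question.

        using System;
        using System.Windows.Forms;

        class TimeBindingSetup
        {
            // Assumed: "source" exposes a DateTime? StartTime property and
            // "box" is the extended MaskedTextBox.
            static void Bind(MaskedTextBox box, object source)
            {
                var binding = new Binding("Text", source, "StartTime", true,
                                          DataSourceUpdateMode.OnValidation,
                                          null, "HH:mm:ss");
                binding.Parse += delegate(object sender, ConvertEventArgs e)
                {
                    // Strip mask prompts/literals; an effectively empty box
                    // writes null (Nothing) into the DateTime? property.
                    string text = ((e.Value as string) ?? "").Trim('_', ':', ' ');
                    if (text.Length == 0)
                        e.Value = null;
                    else
                        e.Value = DateTime.ParseExact((string)e.Value, "HH:mm:ss", null);
                };
                box.DataBindings.Add(binding);
            }
        }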

    Read the article

  • ASP.NET Bind to IEnumerable

    - by JFoulkes
    Hi, I'm passing the type IEnumerable<Item> to my view, and for each item I output an Html.TextBox to enter the details into. When I post this back to my controller, the collection is empty and I can't see why.

        public class Item
        {
            public Order Order { get; set; }
            public string Title { get; set; }
            public double Price { get; set; }
        }

    My GET method:

        public ActionResult AddItems(Order order)
        {
            Item itemOne = new Item { Order = order };
            Item itemTwo = new Item { Order = order, };
            IList<Item> items = new List<Item> { itemOne, itemTwo };
            return View(items);
        }

    The view:

        <% int i = 0;
           foreach (var item in Model) { %>
            <p>
                <label for="Title">Item Title:</label>
                <%= Html.TextBox("items[" + i + "].Title") %>
                <%= Html.ValidationMessage("items[" + i + "].Title", "*")%>
            </p>
            <p>
                <label for="Price">Item Price:</label>
                <%= Html.TextBox("items[" + i + "].Price") %>
                <%= Html.ValidationMessage("items[" + i + "].Price", "*")%>
            </p>
        <% i++;
           } %>

    The POST method:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult AddItems(IEnumerable<Item> items)
        {
            try
            {
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

    At the moment I just have a breakpoint on the POST method to check what I'm getting back.

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem: I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
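    One middle ground worth knowing about (my sketch, not from the question): memory-map the whole array as a single file and compute each element's offset directly, letting the OS page the hot slices in and out. Shown in C# with .NET's MemoryMappedFile; N = 256 is my assumption (256^4 doubles is exactly 32 GiB).

        using System;
        using System.IO.MemoryMappedFiles;

        class BigArray4D
        {
            const long N = 256;                              // 256^4 doubles = 32 GiB
            const long Bytes = N * N * N * N * sizeof(double);

            static void Main()
            {
                using (var mmf = MemoryMappedFile.CreateFromFile(
                           "array4d.bin", System.IO.FileMode.OpenOrCreate,
                           "array4d", Bytes))
                using (var view = mmf.CreateViewAccessor())
                {
                    // Row-major offset of element (i, j, k, l); keep l varying
                    // fastest in loops so access stays sequential on disk.
                    Func<long, long, long, long, long> offset =
                        (i, j, k, l) => (((i * N + j) * N + k) * N + l) * sizeof(double);

                    view.Write(offset(1, 2, 3, 4), 42.0);
                    double x = view.ReadDouble(offset(1, 2, 3, 4));
                    Console.WriteLine(x); // 42
                }
            }
        }

    Sequential sweeps stay close to raw disk speed this way; random access across the outer dimensions will thrash the page cache, which is when the all-in-RAM options mentioned above start to win.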

    Read the article
