Search Results

Search found 8440 results on 338 pages for 'wms implementation'.

  • Is there a fast alternative to creating a Texture2D from a Bitmap object in XNA?

    - by Matthew Bowen
    I've looked around a lot, and the only methods I've found for creating a Texture2D from a Bitmap are:

        using (MemoryStream s = new MemoryStream())
        {
            bmp.Save(s, System.Drawing.Imaging.ImageFormat.Png);
            s.Seek(0, SeekOrigin.Begin);
            Texture2D tx = Texture2D.FromFile(device, s);
        }

    and

        Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height, 0,
                                     TextureUsage.None, SurfaceFormat.Color);
        tx.SetData<byte>(rgbValues, 0, rgbValues.Length, SetDataOptions.NoOverwrite);

    where rgbValues is a byte array containing the bitmap's pixel data in 32-bit ARGB format. My question is: are there any faster approaches I can try? I am writing a map editor which has to read in custom-format images (map tiles) and convert them into Texture2D textures to display. The previous version of the editor, a C++ implementation, converted the images first into bitmaps and then into textures to be drawn using DirectX. I have attempted the same approach here, but both of the above approaches are far too slow. Loading all of the textures required for a map into memory takes ~250 seconds with the first approach and ~110 seconds with the second on a reasonably specced computer. If there were a method to edit the data of a texture directly (such as the Bitmap class's LockBits method), I would be able to convert the custom-format images straight into a Texture2D and hopefully save processing time. Any help would be very much appreciated. Thanks
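
    A minimal sketch of the LockBits route asked about above, assuming the same XNA 3.1-style Texture2D constructor the question uses (the helper name is invented). This skips the PNG encode/decode round trip entirely: lock the bitmap, copy the raw bytes, and hand them to SetData. One caveat: GDI+ stores Format32bppArgb as BGRA in memory, so depending on the XNA version an R/B channel swap may be needed.

        // Hypothetical helper: copy a Bitmap's pixels straight into a Texture2D.
        // Assumes a 32bpp bitmap whose stride has no padding (stride == width * 4).
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;
        using Microsoft.Xna.Framework.Graphics;

        static Texture2D TextureFromBitmap(GraphicsDevice device, Bitmap bmp)
        {
            Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height, 0,
                                         TextureUsage.None, SurfaceFormat.Color);

            BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                                           ImageLockMode.ReadOnly,
                                           PixelFormat.Format32bppArgb);
            try
            {
                byte[] pixels = new byte[data.Stride * data.Height];
                Marshal.Copy(data.Scan0, pixels, 0, pixels.Length); // raw BGRA bytes
                tx.SetData(pixels);                                 // no image codec involved
            }
            finally
            {
                bmp.UnlockBits(data);
            }
            return tx;
        }

    If the custom tile format can be decoded directly into a byte[], the Bitmap stage can be dropped altogether, which should be faster still.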

  • segfault during __cxa_allocate_exception in SWIG wrapped library

    - by lefticus
    While developing a SWIG-wrapped C++ library for Ruby, we came across an unexplained crash during exception handling inside the C++ code. I'm not sure of the specific circumstances needed to recreate the issue, but it happened first during a call to std::uncaught_exception, then, after some code changes, moved to __cxa_allocate_exception during exception construction. Neither GDB nor valgrind provided any insight into the cause of the crash. I've found several references to similar problems, including:

        http://wiki.fifengine.de/Segfault_in_cxa_allocate_exception
        http://forums.fifengine.de/index.php?topic=30.0
        http://code.google.com/p/osgswig/issues/detail?id=17
        https://bugs.launchpad.net/ubuntu/+source/libavg/+bug/241808

    The overriding theme seems to be a combination of circumstances:

    - A C application is linked to more than one C++ library.
    - More than one version of libstdc++ was used during compilation.
    - Generally, the second version of C++ used comes from a binary-only implementation of libGL.
    - The problem does not occur when linking your library with a C++ application, only with a C application.

    The "solution" is to explicitly link your library with libstdc++, and possibly also with libGL, forcing the order of linking. After trying many combinations with my code, the only solution I found that works is running with LD_PRELOAD="libGL.so libstdc++.so.6" ruby scriptname; none of the compile-time linking solutions made any difference. My understanding of the issue is that the C++ runtime is not being properly initialized. By forcing the order of linking you bootstrap the initialization process and it works. The problem occurs only with C applications calling C++ libraries, because the C application does not itself link to libstdc++ and so does not initialize the C++ runtime. Because using SWIG (or boost::python) is a common way of calling a C++ library from a C application, SWIG often comes up when researching the problem. Is anyone out there able to give more insight into this problem? Is there an actual solution, or do only workarounds exist? Thanks.

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, the watcher needs to raise an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious whether anyone knows how expensive this operation would be. I plan on using the NLST command, because I don't need file size information and there will be no sub-directories in the folders. Each file in a folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple of thousand files per folder (max around 2000; usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
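
    Not an implementation, but since the cost question hinges on what a poll actually does, here is a minimal polling sketch (the class and event names are invented) to make the traffic concrete: each poll is one NLST over a short-lived data connection, and a 2000-name listing at 25 characters per name is only ~50 KB.

        // Hypothetical watcher: one NLST per poll via FtpWebRequest, raising an
        // event for any file name not seen on a previous pass.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;

        class FtpFolderWatcher
        {
            private readonly string folderUri;           // e.g. "ftp://host/folder/"
            private readonly NetworkCredential credential;
            private readonly HashSet<string> known = new HashSet<string>();

            public event Action<string> NewFile;

            public FtpFolderWatcher(string folderUri, NetworkCredential credential)
            {
                this.folderUri = folderUri;
                this.credential = credential;
            }

            public void Poll()
            {
                var request = (FtpWebRequest)WebRequest.Create(folderUri);
                request.Method = WebRequestMethods.Ftp.ListDirectory; // NLST
                request.Credentials = credential;

                using (var response = (FtpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string name;
                    while ((name = reader.ReadLine()) != null)
                    {
                        if (known.Add(name) && NewFile != null)
                            NewFile(name); // fires only for names not seen before
                    }
                }
            }
        }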

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate. On the server, we will have a program that takes data from the iPhone, processes it, and produces a series of images. Each time an image is generated, it is sent back to be displayed on the iPhone. I have done all of the above using UDP, OpenGL, and such. It works: the images are transferred to the iPhone and can be displayed, but it is slow. The image's resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a low framerate; I see around 2-3 frames per second. There are also some dropped UDP packets, which is expected. Is there any sort of compression method available for something like this? Is there any other method that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
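
    A sketch of the most direct step, assuming the server can be changed to emit encoded frames: a 320 x 420 RGBA frame is ~525 KB raw, but typically only tens of KB as JPEG, so sending whole JPEG blobs instead of raw pixels attacks the bandwidth problem directly. On the iPhone side the decode is one call (frameData and imageView are placeholders for the reassembled packet data and the display view):

        // Decode a received JPEG/PNG frame and display it; UIKit updates must
        // happen on the main thread.
        UIImage *frame = [UIImage imageWithData:frameData];
        [imageView performSelectorOnMainThread:@selector(setImage:)
                                    withObject:frame
                                 waitUntilDone:NO];

    Sending only the regions that changed since the previous frame, rather than the whole image, is another common refinement.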

  • Subclassing UIButton but can't access my properties

    - by Ross Ellerington
    Hi, I've created a subclass of UIButton:

        //
        //  DetailButton.h

        #import <Foundation/Foundation.h>
        #import <MapKit/MapKit.h>

        @interface MyDetailButton : UIButton {
            NSObject *annotation;
        }

        @property (nonatomic, retain) NSObject *annotation;

        @end

        //
        //  DetailButton.m
        //

        #import "MyDetailButton.h"

        @implementation MyDetailButton

        @synthesize annotation;

        @end

    I figured that I can then create this object and set the annotation object by doing the following:

        MyDetailButton *rightButton = [MyDetailButton buttonWithType:UIButtonTypeDetailDisclosure];
        rightButton.annotation = localAnnotation;

    localAnnotation is an NSObject, but it is really an MKAnnotation. I can't see why this doesn't work, but at runtime I get this error:

        2010-05-27 10:37:29.214 DonorMapProto1[5241:207] *** -[UIButton annotation]: unrecognized selector sent to instance 0x445a190
        2010-05-27 10:37:29.215 DonorMapProto1[5241:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIButton annotation]: unrecognized selector sent to instance 0x445a190'

    I can't see why it's even looking at UIButton, because I've subclassed that, so it should be looking at the MyDetailButton class to set that annotation property. Have I missed something really obvious? It feels like it :) Thanks in advance for any help you can provide. Ross
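
    The error output itself hints at the cause: the instance really is a plain UIButton, because +buttonWithType: is a factory method on UIButton and does not return your subclass for system button types. One workaround, sketched here under manual reference counting (the image name is hypothetical), is to instantiate the subclass directly and supply your own disclosure artwork, since the system detail-disclosure look is tied to the factory method:

        MyDetailButton *rightButton =
            [[[MyDetailButton alloc] initWithFrame:CGRectMake(0, 0, 29, 31)] autorelease];
        [rightButton setImage:[UIImage imageNamed:@"detail-disclosure.png"]
                     forState:UIControlStateNormal];
        rightButton.annotation = localAnnotation; // now resolves against MyDetailButton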

  • Demangling typeclass functions in GHC profiler output

    - by Paul Kuliniewicz
    When profiling a Haskell program compiled with GHC, the names of typeclass functions are mangled in the .prof file to distinguish one instance's implementations from another's. How can I demangle these names to find out which type's instance each one is? For example, suppose I have the following program, where types Fast and Slow both implement Show:

        import Data.List (foldl')

        sum' = foldl' (+) 0

        data Fast = Fast
        instance Show Fast where
          show _ = show $ sum' [1 .. 10]

        data Slow = Slow
        instance Show Slow where
          show _ = show $ sum' [1 .. 100000000]

        main = putStrLn (show Fast ++ show Slow)

    I compile with -prof -auto-all -caf-all and run with +RTS -p. In the .prof file that gets generated, I see that the top cost centres are:

        COST CENTRE    MODULE    %time  %alloc
        show_an9       Main       71.0    83.3
        sum'           Main       29.0    16.7

    And in the tree, I likewise see (omitting irrelevant lines):

                                                          individual      inherited
        COST CENTRE    MODULE    no.  entries  %time  %alloc  %time  %alloc
        main           Main      232        1    0.0     0.0  100.0   100.0
         show_an9      Main      235        1   71.0    83.3  100.0   100.0
          sum'         Main      236        0   29.0    16.7   29.0    16.7
         show_anx      Main      233        1    0.0     0.0    0.0     0.0

    How do I figure out that show_an9 is Slow's implementation of show and not Fast's?
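
    One way to sidestep the mangled names entirely is to attach explicit cost centres to each instance method with SCC pragmas, so the .prof file reports names you chose (the cost-centre names below are arbitrary):

        import Data.List (foldl')

        sum' = foldl' (+) 0

        data Fast = Fast
        instance Show Fast where
          show _ = {-# SCC "show_Fast" #-} show (sum' [1 .. 10])

        data Slow = Slow
        instance Show Slow where
          show _ = {-# SCC "show_Slow" #-} show (sum' [1 .. 100000000])

        main = putStrLn (show Fast ++ show Slow)

    With that, the expensive entry shows up as show_Slow instead of show_an9.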

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations of the Internet checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the Internet Protocol specifies that:

        "The checksum field is the 16 bit one's complement of the one's complement
        sum of all 16 bit words in the header. For purposes of computing the
        checksum, the value of the checksum field is zero."

    More explanation can be found from Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol and don't have test vectors of my own, and would rather unit test it than put it into production to see if it matches what is currently being used! ;-)

    Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.

        [TestMethod()]
        public void InternetChecksum_SimplestValidValue_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
            ushort expected = 0xFFFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0xFF };
            ushort expected = 0xFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
            ushort expected = 0xFF00;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }
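
    For what it's worth, here is a sketch of an extension method that satisfies the three tests above: an RFC 1071-style one's-complement sum over big-endian 16-bit words, with a zero byte implied at the end of odd-length input. It is checked by hand against the question's test vectors, not against live traffic, so treat it accordingly.

        using System.Collections.Generic;

        public static class ChecksumExtensions
        {
            public static ushort InternetChecksum(this IEnumerable<byte> bytes)
            {
                ulong sum = 0;
                bool highByte = true; // first byte of each 16-bit word is the high byte

                foreach (byte b in bytes)
                {
                    sum += highByte ? (ulong)b << 8 : b;
                    highByte = !highByte;
                }

                // Fold the carries back into the low 16 bits (end-around carry).
                while ((sum >> 16) != 0)
                    sum = (sum & 0xFFFF) + (sum >> 16);

                return (ushort)~sum; // one's complement of the sum
            }
        }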

  • cocoa hello world screensaver

    - by RW
    I have been studying NSView, and as such I thought I would take a shot at a screen saver. I have been able to display an image in an NSView, but I can't seem to modify this example code to display a simple picture in a ScreenSaverView: http://www.mactech.com/articles/mactech/Vol.20/20.06/ScreenSaversInCocoa/ (BTW, a great tutorial that works with Snow Leopard). I would think that to simply display an image I would need something that looks like this... What am I doing wrong?

        //
        //  try_screensaverView.m
        //  try screensaver
        //

        #import "try_screensaverView.h"

        @implementation try_screensaverView

        - (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
        {
            self = [super initWithFrame:frame isPreview:isPreview];
            if (self) {
                [self setAnimationTimeInterval:1]; // refresh once per sec
            }
            return self;
        }

        - (void)startAnimation
        {
            [super startAnimation];
            NSString *path = [[NSBundle mainBundle] pathForResource:@"leaf" ofType:@"JPG" inDirectory:@""];
            image = [[NSImage alloc] initWithContentsOfFile:path];
        }

        - (void)stopAnimation
        {
            [super stopAnimation];
        }

        - (void)drawRect:(NSRect)rect
        {
            [super drawRect:rect];
        }

        - (void)animateOneFrame
        {
            // Load image and display. This does not scale the image.
            NSRect bounds = [self bounds];
            NSSize newSize;
            newSize.width = bounds.size.width;
            newSize.height = bounds.size.height;
            [image setSize:newSize];

            NSRect imageRect;
            imageRect.origin = NSZeroPoint;
            imageRect.size = [image size];
            NSRect drawingRect = imageRect;
            [image drawInRect:drawingRect fromRect:imageRect operation:NSCompositeSourceOver fraction:1];
        }

        - (BOOL)hasConfigureSheet
        {
            return NO;
        }

        - (NSWindow *)configureSheet
        {
            return nil;
        }

        @end
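
    A guess worth ruling out first, since pathForResource: failing quietly would produce exactly this nothing-draws behaviour: inside a screen saver, [NSBundle mainBundle] is the ScreenSaverEngine's bundle, not your .saver bundle, so the image path can come back nil. Resolving resources against the saver's own bundle avoids that:

        // Look up "leaf.JPG" inside the .saver bundle itself, not the host app.
        NSString *path = [[NSBundle bundleForClass:[self class]]
                             pathForResource:@"leaf" ofType:@"JPG"];
        image = [[NSImage alloc] initWithContentsOfFile:path];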

  • Why won't my CATiledLayer scroll in a UIScrollView after zooming?

    - by Brodie4598
    I'm currently having the following problem with CATiledLayer: when the view first loads, the scrolling works perfectly, but when you zoom once, the view snaps to the anchor point (the top left corner) and can no longer scroll at all. The zooming works, in that it will zoom in and out, but it will only zoom toward the top left corner. My code is as follows:

        #import <QuartzCore/QuartzCore.h>
        #import "PracticeViewController.h"

        @implementation practiceViewController

        //@synthesize image;

        - (void)viewDidLoad
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"H-5" ofType:@"jpg"];
            NSData *data = [NSData dataWithContentsOfFile:path];
            image = [UIImage imageWithData:data];

            CGRect pageRect = CGRectMake(0, 0, image.size.width, image.size.height);

            CATiledLayer *tiledLayer = [CATiledLayer layer];
            tiledLayer.anchorPoint = CGPointMake(0.0f, 1.0f);
            tiledLayer.delegate = self;
            tiledLayer.tileSize = CGSizeMake(1000, 1000);
            tiledLayer.levelsOfDetail = 6;
            tiledLayer.levelsOfDetailBias = 0;
            tiledLayer.bounds = pageRect;
            tiledLayer.transform = CATransform3DMakeScale(1.0f, -1.0f, 0.3f);

            myContentView = [[UIView alloc] initWithFrame:self.view.bounds];
            [myContentView.layer addSublayer:tiledLayer];

            UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
            scrollView.delegate = self;
            scrollView.contentSize = pageRect.size;
            scrollView.minimumZoomScale = .2;
            scrollView.maximumZoomScale = 1;
            [scrollView addSubview:myContentView];

            [self.view addSubview:scrollView];
        }

        - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
        {
            return myContentView;
        }

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"H-5" ofType:@"jpg"];
            NSData *data = [NSData dataWithContentsOfFile:path];
            image = [UIImage imageWithData:data];

            CGRect imageRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
            CGContextDrawImage(ctx, imageRect, [image CGImage]);
        }

        @end

  • Polymorphic Queue

    - by metdos
    Hello everyone, I'm trying to implement a polymorphic queue. Here is my attempt:

        QQueue<Request *> requests;

        while (...) {
            QString line = QString::fromUtf8(client->readLine()).trimmed();
            if (...) {
                Request *request = new Request();
                request->tcpMessage = line.toUtf8();
                request->decodeFromTcpMessage(); // this initializes variables in request using tcpMessage
                if (request->requestType == REQUEST_LOGIN) {
                    LoginRequest loginRequest;
                    request = &loginRequest;
                    request->tcpMessage = line.toUtf8();
                    request->decodeFromTcpMessage();
                    requests.enqueue(request);
                }
                // Here the pointers in "requests" do not point to the objects I created above,
                // and I noticed that their destructors are also called.
                LoginRequest *loginRequest2 = dynamic_cast<LoginRequest *>(requests.dequeue());
                loginRequest2->decodeFromTcpMessage();
            }
        }

    Unfortunately, I could not manage to make the polymorphic queue work with this code, for the reason I mention in the second comment. I guess I need to use smart pointers, but how? I'm open to any improvement of my code or a new implementation of a polymorphic queue. Thanks.
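
    A sketch of the usual fix: in the snippet above, loginRequest is a stack local, so the queued pointer dangles as soon as the enclosing block ends (which also explains the destructor calls observed in the comment). Heap-allocate the concrete type and let a smart pointer own it; Qt's QSharedPointer keeps the queue safely copyable. Note that Request needs at least one virtual function - typically a virtual destructor - for the dynamic cast to work:

        #include <QQueue>
        #include <QSharedPointer>

        QQueue<QSharedPointer<Request> > requests;

        if (request->requestType == REQUEST_LOGIN) {
            QSharedPointer<Request> login(new LoginRequest()); // heap object, owned by the pointer
            login->tcpMessage = line.toUtf8();
            login->decodeFromTcpMessage();
            requests.enqueue(login);
        }

        // Later: downcast only where the concrete type is needed.
        QSharedPointer<LoginRequest> login2 =
            requests.dequeue().dynamicCast<LoginRequest>();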

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .NET wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions:

    - Why do the .NET sequence numbers differ in size from the CLFS sequence numbers?
    - Why are the .NET sequence numbers variable in length?
    - Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes; more if you address longer words).
    - If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s, instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write or read one? Alternatively, why bother making SequenceNumber a value type at all?

    It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.
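
    No answer for the "why", but as an illustration of the workaround the last question implies, a hypothetical helper can split the value into a pair of UInt64s, tolerating both array sizes that GetBytes() returns (the byte order within the array is an assumption here, not documented behaviour):

        using System;
        using System.IO.Log;

        static class SequenceNumberHelper
        {
            // Split a SequenceNumber's 8- or 16-byte representation into (low, high).
            public static void ToUInt64Pair(SequenceNumber sn, out ulong low, out ulong high)
            {
                byte[] raw = sn.GetBytes();
                low  = BitConverter.ToUInt64(raw, 0);
                high = raw.Length >= 16 ? BitConverter.ToUInt64(raw, 8) : 0;
            }
        }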

  • jQuery Validation plugin results in empty AJAX POST

    - by user305412
    Hi, I have tried to solve this issue for a couple of days with no luck. I have a form where I use the validation plugin, and when I try to submit it, it submits empty vars. Now, if I remove the validation, everything works fine. Here is my implementation:

        $(document).ready(function() {
            $("#myForm").validate({
                rules: {
                    field1: {
                        required: true,
                        minlength: 5
                    },
                    field2: {
                        required: true,
                        minlength: 10
                    }
                },
                messages: {
                    field1: "this is required",
                    field2: "this is required"
                },
                errorLabelContainer: $("#validate_msg"),
                invalidHandler: function(e, validator) {
                    var errors = validator.numberOfInvalids();
                    if (errors) {
                        $("#validate_div").show();
                    }
                },
                onkeyup: false,
                success: false,
                submitHandler: function(form) {
                    doAjaxPost(form);
                }
            });
        });

        function doAjaxPost(form) {
            // do some stuff
            $(form).ajaxSubmit();
            return false;
        }

    As I wrote, this does not work when I have my validation, BUT if I remove that and just add onsubmit="doAjaxPost(this.form); return false;" to my HTML form, it works - any clue? Now here is the funny thing: everything works as it should in Safari, but not in Firefox 3.6.2 (Mac OS X)!

  • Suggestions of the easiest algorithms for some Graph operations

    - by Nazgulled
    Hi, the deadline for this project is closing in very quickly and I don't have much time left. So, instead of looking for the best (and probably more complicated/time-consuming) algorithms, I'm looking for the easiest algorithms to implement a few operations on a graph structure. The operations I'll need are as follows (a Dijkstra sketch follows this list):

    - List all users in the graph network within a given distance X.
    - List all users in the graph network within a given distance X, filtered by type of relation.
    - Calculate the shortest path between 2 users on the graph network, given a type of relation.
    - Calculate the maximum distance between 2 users on the graph network.
    - Calculate the most distant connected users on the graph network.

    A few notes about my graph implementation: an edge node has 2 properties, one of type char and one of type int, representing the type of relation and the weight, respectively. The graph is implemented with linked lists, for both the vertices and the edges; I mean, each vertex points to the next one, and each vertex also points to the head of a different linked list, the edges for that specific vertex.

    What I know about what I need to do: I don't know if this is the easiest, as I said above, but for the shortest path between 2 users, the Dijkstra algorithm is what people seem to recommend pretty often, so I think I'm going with that. I've been searching and searching and I'm finding it hard to implement this algorithm - does anyone know of a tutorial or something easy to understand so I can implement it myself? If possible with C source code examples, that would help a lot; I see many examples with math notation, but that just confuses me even more. Do you think it would help if I "converted" the graph to an adjacency matrix to represent the link weights and relation types? Would it be easier to perform the algorithm on that instead of on the linked lists? I could easily implement a function to do that conversion when needed. I'm saying this because I got the feeling it would be easier after reading a couple of pages about the subject, but I could be wrong. I don't have any ideas about the other 4 operations - suggestions?
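
    Since the question asks for a C example of Dijkstra, here is a compact sketch over an adjacency matrix (the representation the question considers converting to). It is O(V^2), which is fine at this scale; filtering by relation type just means zeroing out non-matching edges during the conversion. V and the weight conventions are assumptions to adapt:

        /* dist[] receives the shortest distance from src to every vertex.
           graph[u][v] holds the edge weight, 0 meaning "no edge". */
        #include <limits.h>

        #define V   6        /* number of vertices; adjust to your graph */
        #define INF INT_MAX

        void dijkstra(int graph[V][V], int src, int dist[V])
        {
            int visited[V] = {0};
            for (int i = 0; i < V; i++) dist[i] = INF;
            dist[src] = 0;

            for (int iter = 0; iter < V; iter++) {
                /* pick the unvisited vertex with the smallest tentative distance */
                int u = -1;
                for (int i = 0; i < V; i++)
                    if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                        u = i;
                if (u == -1 || dist[u] == INF) break; /* the rest are unreachable */
                visited[u] = 1;

                /* relax every edge leaving u */
                for (int v = 0; v < V; v++)
                    if (graph[u][v] && dist[u] + graph[u][v] < dist[v])
                        dist[v] = dist[u] + graph[u][v];
            }
        }

    The other operations then come almost for free: running this from one user gives the distances needed for the distance-X listings, and running it from every vertex and taking the largest finite distance gives both the maximum distance and the most distant connected pair (the graph's diameter).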

  • Problem with TTWebController… alternative view controller template for UIWebView?

    - by prendio2
    I have a UIWebView with content populated from a Last.fm API call. This content contains links, many of which are handled by parsing info from the URL in:

        - (BOOL)webView:(UIWebView *)aWbView shouldStartLoadWithRequest:(NSURLRequest *)request
                                              navigationType:(UIWebViewNavigationType)navigationType;

    ... and passing that info on to appropriate view controllers. When I cannot handle a URL, I would like to push a new view controller onto the stack and load the webpage into a UIWebView there. The TTWebController in Three20 looks very promising, since it also implements a navigation toolbar, loading indicators, etc. Somewhat naively, perhaps, I thought I would be able to use this controller to display web content in my app without implementing Three20 throughout the app. I followed the instructions on GitHub to add Three20 to my project and added the following code to deal with URLs in shouldStartLoadWithRequest:

        TTWebController *browser = [[TTWebController alloc] init];
        [browser openURL:@"http://three20.info"]; // initially test with this URL
        [self.navigationController pushViewController:navigator animated:YES];

    This crashes my app, however, with the following notice:

        *** -[TTNavigator setParentViewController:]: unrecognized selector sent to instance 0x3d5db70

    What else do I need to do to implement the TTWebController in my app? Or do you know of an alternative view controller template that I can use instead of the Three20 implementation?

  • How to provide js-ctypes in a spidermonkey embedding?

    - by Triston J. Taylor
    Summary

    I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than-novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality.

    method.madness

    This js-ctypes implementation by the Mozilla devs is an awesome addition. Since its conception, scripting has been primarily used to exert control over more rigorous and robust applications. The advent of js-ctypes in the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object-oriented rapid application development language, flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6. Shall we begin? I built SpiderMonkey with this config:

        ./configure --enable-ctypes --with-system-nspr

    followed by successful execution of:

        make && make install

    The js shell works fine, and a global ctypes JavaScript object was verified operational in that shell. Working with code taken from the first source listing at How to embed the JavaScript Engine (MDN), I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66:

        /* Populate the global object with the ctypes object. */
        if (!JS_InitCTypesClass(cx, global))
            return NULL;

    I compiled with:

        g++ $(./js-config --cflags --libs) hello.cpp -o hello

    It compiles with a few warnings:

        hello.cpp: In function 'int main(int, const char**)':
        hello.cpp:69:16: warning: converting to non-pointer type 'int' from NULL [-Wconversion-null]
        hello.cpp:80:20: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings]
        hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith]

    But when you run the application:

        ./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass

    Moreover, JS_InitCTypesClass is declared extern in dist/include/jsapi.h, but the function resides in ctypes/CTypes.cpp, which includes its own header ctypes/CTypes.h and is compiled at some point during 'make' to yield ./CTypes.o. As I stated earlier, I am less than a novice with C code, and I really have no idea what to do here. Please give, or give direction to, a generic example of making the js-ctypes object functional in an embedding.

  • PayPal integration woes: PDT hangs on return to site

    - by Tom
    Hi, I'm implementing PayPal IPN & PDT. After some headache and time at the sandbox, IPN is working well, and PDT returns the correct $_GET data. The implementation is as follows:

    1. Pass the user ID in a form to PayPal.
    2. The user buys the product and triggers IPN, which updates the database for the given user ID.
    3. PDT returns a transaction ID when the user returns to the site.
    4. The return page says "please wait" and repeat-Ajax-checks for the transaction status.
    5. The user is redirected to a success/failure page.

    Everything works well, EXCEPT that when using the PayPal-supplied PHP code for PDT to do a return POST, the page hangs: PayPal waits for a response and the user never gets back to my site. I'm not getting a fail status, just nothing. The funny thing is that once the unknown error occurs, my test domain becomes unresponsive for a short period. The code (PHP): https://www.paypal.com/us/cgi-bin/webscr?cmd=p/xcl/rec/pdt-code-outside

    If I comment out the POST back, it all works fine. I'm able to pin the problem down to the point where the code enters the while{} loop. Unfortunately, I'm not experienced enough to write a replacement from scratch for the PayPal code, so I would really appreciate any ideas on what might be wrong. The POST back goes to ssl://www.sandbox.paypal.com, and I'm using button code and an authorisation token that were all created via a sandbox test account. Thanks in advance.
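
    A sketch of the usual suspect, offered as a guess: PayPal's sample reads the response with fgets() until feof(), but if the server keeps the connection alive, the loop never terminates and the page hangs - which would also match the temporarily unresponsive test domain. Asking the server to close the connection after responding makes the loop finish (variable names follow PayPal's sample; $tx_token comes from the return URL, $auth_token is your PDT identity token):

        <?php
        $req = 'cmd=_notify-synch&tx=' . urlencode($tx_token) . '&at=' . urlencode($auth_token);

        $header  = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Host: www.sandbox.paypal.com\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n";
        $header .= "Connection: close\r\n\r\n"; // <-- the important line

        $fp = fsockopen('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);
        if ($fp) {
            fwrite($fp, $header . $req);
            $res = '';
            while (!feof($fp)) {
                $res .= fgets($fp, 1024); // now terminates when PayPal closes the socket
            }
            fclose($fp);
        }
        ?>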

  • UITextField to show UIPickerView in xcode 4.3.2 using textField.inputView

    - by ChromeBlue
    I'm looking to have a UITextField display a UIPickerView when touched. I found quite a few questions on the same topic that say to use UITextField.inputView. However, when I try, nothing changes. I have very simple code:

    setupViewController.h

        #import <UIKit/UIKit.h>

        extern NSString *setupRightChannel;

        @interface setupViewController : UIViewController {
            UITextField *rightChannelField;
            UIPickerView *channelPicker;
        }

        @property (retain, nonatomic) IBOutlet UITextField *rightChannelField;
        @property (retain, readwrite) IBOutlet UIPickerView *channelPicker;

    setupViewController.m

        #import "setupViewController.h"

        @interface setupViewController ()
        @end

        @implementation setupViewController

        @synthesize rightChannelField;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
        {
            self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
            if (self) {
                // Custom initialization
            }
            return self;
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            self.navigationItem.title = @"Setup";
            // Do any additional setup after loading the view from its nib.
            rightChannelField.inputView = channelPicker;
        }

        - (IBAction)okClicked:(id)sender
        {
            setupRightChannel = rightChannelField.text;
        }

    I feel like I might be missing something fundamental here, but I honestly cannot think of what it is.
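
    One thing worth checking, sketched here as a guess: inputView must be non-nil when the field becomes first responder, and if channelPicker is a nib outlet that was never connected (note it is also never synthesized above), the assignment silently sets nil and the standard keyboard appears. Creating the picker in code removes the nib dependency:

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            self.navigationItem.title = @"Setup";

            if (channelPicker == nil) {
                channelPicker = [[UIPickerView alloc] initWithFrame:CGRectZero];
                channelPicker.showsSelectionIndicator = YES;
                // channelPicker.delegate and .dataSource still need wiring up
                // for the picker to display any rows.
            }
            rightChannelField.inputView = channelPicker;
        }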

  • Listing serial (COM) ports on Windows?

    - by Eli Bendersky
    Hello, I'm looking for a robust way to list the available serial (COM) ports on a Windows machine. There's this post about using WMI, but I would like something less .NET-specific - I want to get the list of ports in a Python or a C++ program, without .NET. I currently know of two other approaches:

    - Reading the information in the HARDWARE\DEVICEMAP\SERIALCOMM registry key. This looks like a great option, but is it robust? I can't find a guarantee online or in MSDN that this registry cell indeed always holds the full list of available ports.
    - Trying to call CreateFile on COMN, with N a number from 1 to something. This isn't good enough, because some COM ports aren't named COMN. For example, some virtual COM ports created are named CSNA0, CSNB0, and so on, so I wouldn't rely on this method.

    Any other methods/ideas/experience to share? Edit: by the way, here's a simple Python implementation of reading the port names from the registry:

        import _winreg as winreg
        import itertools

        def enumerate_serial_ports():
            """ Uses the Win32 registry to return an iterator of serial
                (COM) ports existing on this computer.
            """
            path = 'HARDWARE\\DEVICEMAP\\SERIALCOMM'
            try:
                key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            except WindowsError:
                return  # no SERIALCOMM key means no ports to enumerate
            for i in itertools.count():
                try:
                    val = winreg.EnumValue(key, i)
                    yield (str(val[1]), str(val[0]))
                except EnvironmentError:
                    break

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (ANSI-only) ShortString type. I mean, if I declare S: ShortString and let S := 'My String', then at @S I will find the length of the string (as one byte, so the string cannot contain more than 255 characters), followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (a single byte would actually be enough) containing the length of the string in bytes (or in characters, which is half the number of bytes), followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such).

    Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right?

    Update: I will, among other things, need to use these strings in packed records. I also need to manually read/write these strings to files/the heap. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 1 - it's merely an address. The approach I eventually settled for was to use a static array of bytes. I will post my implementation as a solution later today.
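
    A minimal sketch of that static-array approach, assuming the 128-character cap mentioned above and a Unicode Delphi (2009+, where string is UTF-16): a one-byte length prefix plus a fixed buffer, so the whole value lives at @S and can sit in a packed record or be written to a file as raw bytes.

        type
          TFixedUnicodeString = packed record
            Len:  Byte;                       // length in characters, 0..128
            Data: array[0..127] of WideChar;  // UTF-16 code units, not zero-terminated
          end;

        procedure SetFixedString(var S: TFixedUnicodeString; const Text: string);
        var
          N: Integer;
        begin
          N := Length(Text);
          if N > 128 then
            N := 128;                         // silently truncate; raise instead if preferred
          S.Len := N;
          Move(PChar(Text)^, S.Data[0], N * SizeOf(WideChar));
        end;

        function GetFixedString(const S: TFixedUnicodeString): string;
        begin
          SetString(Result, PChar(@S.Data[0]), S.Len);
        end;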

  • C++ find largest BST in a binary tree

    - by fonjibe
    What is your approach to finding the largest BST in a binary tree? I refer to this post, where a very good implementation for finding whether a tree is a BST or not is:

        bool isBinarySearchTree(BinaryTree *n,
                                int min = std::numeric_limits<int>::min(),
                                int max = std::numeric_limits<int>::max())
        {
            return !n || (min < n->value && n->value < max
                          && isBinarySearchTree(n->l, min, n->value)
                          && isBinarySearchTree(n->r, n->value, max));
        }

    It is quite easy to implement a solution to find whether a tree contains a binary search tree; I think the following method does it:

        bool includeSomeBST(BinaryTree *n)
        {
            if (!isBinarySearchTree(n)) {
                if (!isBinarySearchTree(n->left))
                    return isBinarySearchTree(n->right);
                else
                    return true;
            }
            else
                return true;
        }

    But what if I want the largest BST? This is my first idea:

        BinaryTree *largestBST(BinaryTree *n)
        {
            if (isBinarySearchTree(n))
                return n;
            if (!isBinarySearchTree(n->left)) {
                if (!isBinarySearchTree(n->right)) {
                    if (includeSomeBST(n->right))
                        return largestBST(n->right);
                    else if (includeSomeBST(n->left))
                        return largestBST(n->left);
                    else
                        return NULL;
                }
                else
                    return n->right;
            }
            else
                return n->left;
        }

    But it's not actually returning the largest one. I struggle to make the comparison - how should it take place? Thanks
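
    A sketch of the comparison the question struggles with: instead of re-running isBinarySearchTree at every node (which is O(n^2) overall), a post-order walk can report, for each subtree, whether it is a BST, its value range, its size, and the best candidate found so far - one O(n) pass. Field names (value, left, right) follow the question:

        #include <algorithm>
        #include <climits>

        struct Result {
            bool isBST;
            int  min, max;        // value range of this subtree (meaningful only if isBST)
            int  size;            // node count of this subtree when it is a BST
            BinaryTree *best;     // root of the largest BST found in this subtree
            int  bestSize;
        };

        Result largestBST(BinaryTree *n)
        {
            if (!n) return { true, INT_MAX, INT_MIN, 0, nullptr, 0 };

            Result l = largestBST(n->left);
            Result r = largestBST(n->right);

            Result out;
            out.isBST = l.isBST && r.isBST && l.max < n->value && n->value < r.min;
            if (out.isBST) {
                out.min  = std::min(l.min, n->value);
                out.max  = std::max(r.max, n->value);
                out.size = l.size + r.size + 1;
                out.best = n;                  // this whole subtree is a BST
                out.bestSize = out.size;
            } else {
                out.min = out.max = n->value;  // unused when !isBST, set for definiteness
                out.size = 0;
                // carry forward the larger of the two candidates found below
                out.best     = (l.bestSize >= r.bestSize) ? l.best : r.best;
                out.bestSize = std::max(l.bestSize, r.bestSize);
            }
            return out;
        }

    largestBST(root).best is then the root of the largest BST in the tree.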

  • COM-Objects containing maps / content error(0)

    - by Florian Berzsenyi
    I'm writing a small wrapper to get familiar with some important topics in C++ (WinAPI, COM, STL, OOP). For now, my class shall be able to create a (child) window. Mainly, this window is connected to a global message loop that distributes messages to the local loop of the right instance (the global loop is static, the local one virtual). Obviously, there are surely better ways to do that, but I'm using std::map to store HWNDs and their instance pointers in pairs (the global loop receives the HWND parameter, looks up the instance pointer in the map, and then calls the local loop). Now, it appears that the map does not accept any values, for an unknown reason. It seems to allocate enough space, but something goes wrong anyway ("(error) 0" is displayed instead of the entries in Visual C++). I've looked this up in Google as well and found out that maps cause some trouble when used in classes AND DLLs. Might this be the reason, and is there any solution?

    In the protected scope of the class:

        static std::map<HWND, MAP_BASE_OBJECT*> m_LoopBuf;

    Implementation in the .cpp file:

        std::map<HWND, MAP_BASE_OBJECT*> HWindow::m_LoopBuf;

  • Best way to split a string by word (SQL Batch separator)

    - by Paul Kohler
    I have a class I use to "split" a string of SQL commands by a batch separator - e.g. "GO" - into a list of SQL commands that are run in turn, etc.

        private static IEnumerable<string> SplitByBatchIndecator(string script, string batchIndicator)
        {
            string pattern = string.Concat("^\\s*", batchIndicator, "\\s*$");
            RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase | RegexOptions.Multiline;
            foreach (string batch in Regex.Split(script, pattern, options))
            {
                yield return batch.Trim();
            }
        }

    My current implementation uses a Regex with yield, but I am not sure if it's the "best" way.

    - It should be quick.
    - It should handle large strings (I have some scripts that are 10 MB in size, for example).
    - The hardest part (which the above code currently does not do) is taking quoted text into account.

    Currently the following SQL will incorrectly get split:

        var batch = QueryBatch.Parse(@"-- issue...
        insert into table (name, desc)
        values('foo', 'if the
        go
        is on a line by itself we have a problem...')");

        Assert.That(batch.Queries.Count, Is.EqualTo(1), "This fails for now...");

    I have thought about a token-based parser that tracks the state of open/closed quotes, but am not sure if Regex will do it. Any ideas?!
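
    A sketch of the token-based idea mentioned at the end, swapping the regex for a single line-by-line pass that tracks whether the scanner is inside a '...' literal (a doubled '' toggles twice, so it cancels out). A lone GO only separates batches when the line both starts and ends outside a string, which makes the test case above pass; T-SQL comments and bracketed identifiers would still need the same treatment:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        static IEnumerable<string> SplitBatches(string script, string separator)
        {
            var batch = new StringBuilder();
            bool inQuote = false;

            using (var reader = new StringReader(script))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    bool startedOutsideQuote = !inQuote;
                    foreach (char c in line)
                        if (c == '\'') inQuote = !inQuote;

                    if (startedOutsideQuote && !inQuote &&
                        line.Trim().Equals(separator, StringComparison.OrdinalIgnoreCase))
                    {
                        yield return batch.ToString().Trim();
                        batch.Length = 0;       // start the next batch
                    }
                    else
                    {
                        batch.AppendLine(line);
                    }
                }
            }
            if (batch.Length > 0)
                yield return batch.ToString().Trim();
        }

    Being a single streaming pass, this also avoids regex backtracking over 10 MB scripts.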

  • cookie name is truncated in Servlet 3.0 HttpServletRequest (Glassfish V3)

    - by idplmal
    I'm porting our Web authentication/authorization middleware for use in containers implementing the new Servlet 3.0 API (Glassfish V3 in this case). The middleware pulls cookies from the HttpServletRequest, filtering on cookies with names of the form "DACS:FEDERATION::JURISDICTION:username". This works fine with the version 2.5 servlet API but is broken in 3.0: the cookie names in 3.0 are being truncated at the first ":" in the name. I understand that the Servlet 3.0 implementation defaults to RFC 2109 cookies, which are more restrictive about cookie names than the old Netscape spec (":" is among the characters not allowed in RFC 2109 cookie names). Digging into the Servlet 3.0 source code, it appears that the use of RFC 2109 names can be disabled by setting a System property "org.glassfish.web.rfc2109.cookie_names_enforced" to false. I've tried this to no avail. Besides that, the code that checks cookie names is in the constructor for Cookie, and it would appear that the truncation is occurring elsewhere. So - finally - the question: have others bumped into such issues in the Servlet 3.0 API, and have you found a workaround?

  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with 1 or 2 classic ASP pages), and am looking for some suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary:

    - It will mostly run in MS Windows environments.
    - I have a modest budget for this and do not mind paying for such a database.
    - No runtime licensing costs.
    - It should be xcopy-deployable: once uploaded to a website folder, it should be operational.
    - It should not have a .NET CLR dependency.
    - It should support a reasonable level of concurrent access. The average respondent count would be around 20-30, but one never knows.
    - It should be a reasonable size, so that uploads/downloads to and from the site will be reasonably fast.

    Would appreciate your suggestions/comments. Many thanks, Abz

    To clarify: this is a desktop commercial application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback; I now want to add a web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now am looking at the above as a pragmatic way to provide web-based feedback collection.

  • Why is there no autorelease pool when I do performSelectorInBackground: ?

    - by Thanks
    I am calling a method that runs on a background thread:

        [self performSelectorInBackground:@selector(loadViewControllerWithIndex:)
                               withObject:[NSNumber numberWithInt:viewControllerIndex]];

    Then I have this method implementation that gets called by the selector:

        - (void)loadViewControllerWithIndex:(NSNumber *)indexNumberObj
        {
            NSAutoreleasePool *arPool = [[NSAutoreleasePool alloc] init];

            NSInteger vcIndex = [indexNumberObj intValue];
            Class c;
            UIViewController *controller = [viewControllers objectAtIndex:vcIndex];

            switch (vcIndex) {
                case 0:
                    c = [MyFirstViewController class];
                    break;
                case 1:
                    c = [MySecondViewController class];
                    break;
                default:
                    NSLog(@"unknown index for loading view controller: %d", vcIndex); // error
                    break;
            }

            if ((NSNull *)controller == [NSNull null]) {
                controller = [[c alloc] initWithNib];
                [viewControllers replaceObjectAtIndex:vcIndex withObject:controller];
                [controller release];
            }

            if (controller.view.superview == nil) {
                UIView *placeholderView = [viewControllerPlaceholderViews objectAtIndex:vcIndex];
                [placeholderView addSubview:controller.view];
            }

            [arPool release];
        }

    Although I do create an autorelease pool there for that thread, I always get this error:

        2009-05-30 12:03:09.910 Demo[1827:3f03] *** _NSAutoreleaseNoPool(): Object 0x523e50 of class NSCFNumber autoreleased with no pool in place - just leaking
        Stack: (0x95c83f0f 0x95b90442 0x28d3 0x2d42 0x95b96e0d 0x95b969b4 0x93a00155 0x93a00012)

    If I take the autorelease pool away, I get a whole bunch of messages like these. I also tried creating an autorelease pool around the call to performSelectorInBackground:, but that doesn't help. I suspect the parameter, but I don't know why the runtime complains about an NSCFNumber. Am I missing something? My instance variables are all "nonatomic"; can that be a problem?

    UPDATE: I also suspect that some variable has been added to an autorelease pool of the main thread (maybe an ivar), and now it tries to release that one inside the wrong autorelease pool? If so, how could I fix that? (damn, this threading stuff is complex ;) )
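
    Two guesses rather than a diagnosis. First, UIKit is not thread-safe: touching controller.view and calling addSubview: from a background thread is itself invalid and can surface as exactly this kind of stray-autorelease noise. A common restructuring keeps only non-UI work in the background and bounces the UIKit step back to the main thread (attachViewControllerWithIndex: is a hypothetical method that would do the view work):

        - (void)loadViewControllerWithIndex:(NSNumber *)indexNumberObj
        {
            NSAutoreleasePool *arPool = [[NSAutoreleasePool alloc] init];

            // ... background-safe, non-UIKit work only ...

            // Hypothetical main-thread continuation for the UIKit part.
            [self performSelectorOnMainThread:@selector(attachViewControllerWithIndex:)
                                   withObject:indexNumberObj
                                waitUntilDone:NO];
            [arPool release];
        }

    Second, the _NSAutoreleaseNoPool warning names an NSCFNumber, which suggests an object autoreleased on the new thread before the pool exists or after it is drained, so it is worth checking anything the runtime may invoke on this thread outside the pool's lifetime.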
