Search Results

Search found 9353 results on 375 pages for 'implementation phase'.

Page 289 of 375

  • Use Objective-C without NSObject?

    - by Alex I
    I am testing some simple Objective-C code on Windows (cygwin, gcc). This code already works in Xcode on Mac. I would like to convert my objects to not subclass NSObject (or anything else, lol). Is this possible, and how? What I have so far: // MyObject.h @interface MyObject - (void)myMethod:(int) param; @end and // MyObject.m #include "MyObject.h" @interface MyObject() { // this line is a syntax error, why? int _field; } @end @implementation MyObject - (id)init { // what goes in here? return self; } - (void)myMethod:(int) param { _field = param; } @end What happens when I try compiling it: gcc -o test MyObject.m -lobjc MyObject.m:4:1: error: expected identifier or ‘(’ before ‘{’ token MyObject.m: In function ‘-[MyObject myMethod:]’: MyObject.m:17:3: error: ‘_field’ undeclared (first use in this function) EDIT My compiler is cygwin's gcc, also has cygwin gcc-objc package: gcc --version gcc (GCC) 4.7.3 I have tried looking for this online and in a couple of Objective-C tutorials, but every example of a class I have found inherits from NSObject. Is it really impossible to write Objective-C without Cocoa or some kind of Cocoa replacement that provides NSObject? (Yes, I know about GNUstep. I would really rather avoid that if possible...)

    Read the article

  • NSDate out of scope

    - by therealtkd
    Having problems with "out of scope" for NSDate in an iPhone app. I have an interface defined like this: @interface MyObject : NSObject { NSMutableArray *array; BOOL checkThis; NSDate *nextDue; } Now in the implementation I have this: -(id) init { if( (self=[super init]) ) { checkThis = NO; array = [[NSMutableArray alloc] init]; nextDue = [[NSDate date] retain]; NSDate *testDate = [NSDate date]; } return self; } Now, if I trace through the init, before I actually assign the variables, checkThis shows as boolean. array shows as pointer 0x0 because it hasn't been assigned. But the nextDue is showing as 'out of scope'. I don't understand why this is out of scope but the other variables aren't. If I trace through the code until after the variables are assigned, array now shows as being correctly assigned but nextDue is still out of scope. Interestingly, the testDate variable is assigned just fine and the debugger shows this as a valid date. A further interesting point is that if I move the mouse over the testDate variable while I am debugging, it shows as an 'NSDate *' type, which I would expect since that's its definition. Yet the nextDue, which to me is defined the same way, is showing as a '_NSCFDate *'. Any googling I did on the subject said that the retain is the problem, but it's actually out of scope before I even try to assign the variable. However, in another class, the same definition for NSDate works OK. It shows as nil before a value is assigned to it. Arghhh

    Read the article

  • Exception Handling in ASP.NET MVC and Ajax - [HandleException] filter

    - by Graham
    All, I'm learning MVC and using it for a business app (MVC 1.0). I'm really struggling to get my head around exception handling. I've spent a lot of time on the web but not found anything along the lines of what I'm after. We currently use a filter attribute that implements IExceptionFilter. We decorate a base controller class with this so all server side exceptions are nicely routed to an exception page that displays the error and performs logging. I've started to use AJAX calls that return JSON data but when the server side implementation throws an error, the filter is fired but the page does not redirect to the Error page - it just stays on the page that called the AJAX method. Is there any way to force the redirect on the server (e.g. an ASP.NET Server.Transfer or redirect)? I've read that I must return a JSON object (wrapping the .NET Exception) and then redirect on the client, but then I can't guarantee the client will redirect... but then (although I'm probably doing something wrong) the server attempts to redirect but then gets an unauthorised exception (the base controller is secured but the Exception controller is not, as it does not inherit from this). Has anybody got a simple example (.NET and jQuery code), please? I feel like I'm randomly trying things in the hope it will work. Exception Filter so far... public class HandleExceptionAttribute : FilterAttribute, IExceptionFilter { #region IExceptionFilter Members public void OnException(ExceptionContext filterContext) { if (filterContext.ExceptionHandled) { return; } filterContext.Controller.TempData[CommonLookup.ExceptionObject] = filterContext.Exception; if (filterContext.HttpContext.Request.IsAjaxRequest()) { filterContext.Result = AjaxException(filterContext.Exception.Message, filterContext); } else { //Redirect to global handler filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary(new { controller = AvailableControllers.Exception, action = AvailableActions.HandleException })); filterContext.ExceptionHandled = true; filterContext.HttpContext.Response.Clear(); } } #endregion private JsonResult AjaxException(string message, ExceptionContext filterContext) { if (string.IsNullOrEmpty(message)) { message = "Server error"; //TODO: Replace with better message } filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.InternalServerError; filterContext.HttpContext.Response.TrySkipIisCustomErrors = true; //Needed for IIS7.0 return new JsonResult { Data = new { ErrorMessage = message }, ContentEncoding = Encoding.UTF8, }; } }

    Read the article

  • Generic object to object mapping with parametrized constructor

    - by Rody van Sambeek
    I have a data access layer which returns an IDataRecord. I have a WCF service that serves DataContracts (dto's). These DataContracts are initialized via a parametrized constructor containing the IDataRecord as follows: [DataContract] public class DataContractItem { [DataMember] public int ID; [DataMember] public string Title; public DataContractItem(IDataRecord record) { this.ID = Convert.ToInt32(record["ID"]); this.Title = record["title"].ToString(); } } Unfortunately I can't change the DAL, so I'm obliged to work with the IDataRecord as input. But in general this works very well. The mappings are pretty simple most of the time, sometimes they are a bit more complex, but no rocket science. However, now I'd like to be able to use generics to instantiate the different DataContracts to simplify the WCF service methods. I want to be able to do something like: public T DoSomething<T>(IDataRecord record) { ... return new T(record); } So I tried the following solutions: Use a generic typed interface with a constructor. doesn't work: of course we can't define a constructor in an interface Use a static method to instantiate the DataContract and create a typed interface containing this static method. doesn't work: of course we can't define a static method in an interface Use a generic typed interface containing the new() constraint doesn't work: new() constraint cannot contain a parameter (the IDataRecord) Using a factory object to perform the mapping based on the DataContract Type. does work, but: not very clean, because I now have a switch statement with all mappings in one file. I can't find a real clean solution for this. Can somebody shed some light on this for me? The project is too small for any complex mapping techniques and too large for a "switch-based" factory implementation.
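
    A rough sketch of how the factory-object option at the end can lose its switch statement, written in Python only to keep it short (the names and the dict-based lookup are illustrative, not taken from the question's C# code): register one factory callable per contract type once, and let the generic method simply look it up. In C# the same shape is usually something like a Dictionary<Type, Func<IDataRecord, object>> populated at startup.

        # Hypothetical registry: maps a contract name to a factory callable.
        _factories = {}

        def register(contract, factory):
            # Register a factory that builds `contract` from a data record.
            _factories[contract] = factory

        def create(contract, record):
            # Instantiate the requested contract from an IDataRecord-like mapping.
            return _factories[contract](record)

        # One registration per DataContract replaces one case in the switch.
        register("DataContractItem",
                 lambda r: {"ID": int(r["ID"]), "Title": str(r["title"])})

        item = create("DataContractItem", {"ID": "42", "title": "Example"})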

    Read the article

  • tabBarController and navigationControllers in landscape mode, episode II

    - by unforgiven
    I have a UITabBarController, and each tab handles a different UIViewController that pushes new controllers onto the stack as needed. In two of these tabs I need, when a specific controller is reached, the ability to rotate the iPhone and visualize a view in landscape mode. After struggling a lot I have found that it is mandatory to subclass UITabBarController and override shouldAutorotateToInterfaceOrientation. However, if I simply return YES in the implementation, the following undesirable side effect arises: every controller in every tab is automatically put in landscape mode when rotating the iPhone. Even overriding shouldAutorotateToInterfaceOrientation in each controller to return NO does not work: when the iPhone is rotated, the controller is put in landscape mode. I implemented shouldAutorotateToInterfaceOrientation as follows in the subclassed UITabBarController: - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if([self selectedIndex] == 0 || [self selectedIndex] == 3) return YES; return NO; } So that only the two tabs I am interested in actually get support for landscape mode. Is there a way to support landscape mode for a specific controller on the stack of a particular tab? I tried, without success, something like - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if([self selectedIndex] == 0 || [self selectedIndex] == 3) { if ([[self selectedViewController] isKindOfClass: [landscapeModeViewController class]]) return YES; } return NO; } Also, I tried using the delegate method didSelectViewController, without success. Any help is greatly appreciated. Thank you.

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example—it's an approximation of the second derivative of lgamma: double trigamma(double x) { double p; int i; x=x+6; p=1/(x*x); p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238) *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p; for (i=0; i<6 ;i++) { x=x-1; p=1/(x*x)+p; } return(p); } I've translated this into more or less idiomatic Haskell as follows: trigamma :: Double -> Double trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p') where x' = x + 6 p = 1 / x' ^ 2 p' = p / 2 + c / x' c = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66] next (x, p) = (x - 1, 1 / x ^ 2 + p) The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing—some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, My current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want to improve it, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation that is quick enough for solving problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60 I have very little memory at my disposal so I can't store all the prime numbers below 1 billion. I'm currently using the standard trial division tuned with 6k±1. Is there anything better than this? Do I already need to get the Rabin-Miller method for numbers that are this large? primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] def isprime(n): if n <= 100: return n in primes_under_100 if n % 2 == 0 or n % 3 == 0: return False for f in range(5, int(n ** .5) + 1, 6): if n % f == 0 or n % (f + 2) == 0: return False return True How can I improve this algorithm?
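
    One approach that fits the stated bound, sketched below: Miller-Rabin with a fixed witness set. Testing only the bases 2, 7 and 61 is reported to be deterministic for every n below 4,759,123,141, which comfortably covers numbers up to 1 billion, so no probabilistic rounds are needed. This is a sketch rather than drop-in code for the Euler problem.

        def is_prime(n):
            # Deterministic Miller-Rabin for n < 4,759,123,141 (bases 2, 7, 61).
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 61):
                if n % p == 0:
                    return n == p
            # Write n - 1 as d * 2^s with d odd.
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for a in (2, 7, 61):
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = x * x % n
                    if x == n - 1:
                        break
                else:
                    return False  # a witnesses that n is composite
            return True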

    Read the article

  • Why did I get a "sent to freed object" error?

    - by Tattat
    I have a Table View, and CharTableController, the CharTableController works like this: .h: #import <Foundation/Foundation.h> @interface CharTableController : UITableViewController <UITableViewDelegate, UITableViewDataSource>{ // IBOutlet UILabel *debugLabel; NSArray *listData; } //@property (nonatomic, retain) IBOutlet UILabel *debugLabel; @property (nonatomic, retain) NSArray *listData; @end The .m: #import "CharTableController.h" @implementation CharTableController @synthesize listData; - (void)viewDidLoad { NSArray *array = [[NSArray alloc] initWithObjects:@"Sleepy", @"Sneezy", @"Bashful", @"Happy", @"Doc", @"Grumpy", @"Dopey", @"Thorin", @"Dorin", @"Nori", @"Ori", @"Balin", @"Dwalin", @"Fili", @"Kili", @"Oin", @"Gloin", @"Bifur", @"Bofur", @"Bombur", nil]; self.listData = array; [array release]; [super viewDidLoad]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [self.listData count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *SimpleTableIdentifier = @"SimpleTableIdentifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: SimpleTableIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:SimpleTableIdentifier] autorelease]; NSUInteger row = [indexPath row]; cell.textLabel.text = [listData objectAtIndex:row]; } return cell; } @end And I Use the IB to link the TableView's dataSource and delegate to the CharTableController. In the CharTableController's view is the TableView in IB obviously. Reference Object in dataSource TableView and delegate TableView. What's wrong with my setting? thz.

    Read the article

  • How do I edit a row in NSTableView to allow deleting the data in that row and replacing it with new data?

    - by lampShade
    I'm building a to-do-list application and I want to be able to edit the entries in the table and replace them with new entries. I'm close to being able to do what I want but not quite. Here is my code so far: /* IBOutlet NSTextField *textField; IBOutlet NSTabView *tableView; IBOutlet NSButton *button; NSMutableArray *myArray; */ #import "AppController.h" @implementation AppController -(IBAction)addNewItem:(id)sender { [myArray addObject:[textField stringValue]]; [tableView reloadData]; } - (int)numberOfRowsInTableView:(NSTableView *)aTableView { return [myArray count]; } - (id)tableView:(NSTableView *)aTableView objectValueForTableColumn:(NSTableColumn *)aTableColumn row:(int)rowIndex { return [myArray objectAtIndex:rowIndex]; } - (id)init { [super init]; myArray = [[NSMutableArray alloc] init]; return self; } -(IBAction)removeItem:(id)sender { NSLog(@"This is the index of the selected row: %d",[tableView selectedRow]); NSLog(@"the clicked row is %d",[tableView clickedRow]); [myArray replaceObjectAtIndex:[tableView selectedRow] withObject:[textField stringValue]]; [myArray addObject:[textField stringValue]]; //[tableView reloadData]; } @end

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof which was given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really that practical. But I started to wonder now if it does make sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete because none of them will be able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are just as powerful as finite state machines and no more. And that brings me to the question: How useful is the term Turing complete at all? And back to neural nets: For any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was already answered much earlier (1954 by Minsky, the answer of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • Key Tips in WPF

    - by Brad Leach
    Office 2007 and the Ribbon introduced the concept of "Key Tips". In short, every single command in the Ribbon receives a letter which you can press to activate that command. ... The letters are indicated by small "KeyTips" which indicate the letter to press to activate the control. KeyTips are displayed using the Alt key, so using them feels similar to how menu navigation works in Windows. (Source: http://blogs.msdn.com/jensenh/archive/2006/04/12/574930.aspx) An example: the user has pressed the ALT key, and the Ribbon is awaiting further input. Are there any WPF Open Source examples of "Key Tips"? How would you go about implementing something like this feature in a generic way (i.e. not requiring a Ribbon)? How would you implement this using an MVVM pattern (given that ICommand does not support InputBindings)? Note: ActiPro have implemented this feature in their implementation of a Ribbon, but they have not released source code.

    Read the article

  • ContextMenu not popping up on Long click

    - by primal
    Hi, The context menu is not popping up on the long click on the list items in the list view. I've extended the base adapter and used a view holder to implement the custom list with textviews and an imagebutton. adapter = new MyClickableListAdapter(this, R.layout.timeline, mObjectList); list.setAdapter(adapter); registerForContextMenu(list); Implementation of onCreateContextMenu @Override public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) { // TODO Auto-generated method stub super.onCreateContextMenu(menu, v, menuInfo); Log.d(TAG, "Entering Context Menu"); menu.setHeaderTitle("Context Menu"); menu.add(Menu.NONE, DELETE_ID, Menu.NONE, "Delete") .setIcon(R.drawable.icon); } The XML for listview is here <ListView android:id="@+id/list" android:layout_width="fill_parent" android:layout_height="wrap_content" /> I've been trying this for many days. I think it's impossible to register a context menu for a custom list view like this. Correct me if I am wrong (possibly with sample code). Now I am thinking of adding a button to the list item that displays a menu on clicking it. Is it possible in some other way than using Dialogs? Any help would be much appreciated.

    Read the article

  • Finding a picture in a picture with Java?

    - by tarrasch
    What I want to do is analyse input from the screen in the form of pictures. I want to be able to identify a part of an image in a bigger image and get its coordinates within the bigger picture. Example: a small picture would have to be located inside a bigger one, and the result would be the upper right corner of the picture in the big picture and the lower left of the part in the big picture. As you can see, the white part of the picture is irrelevant; what I basically need is just the green frame. Is there a library that can do something like this for me? Runtime is not really an issue. What I want to do with this is just generate a few random pixel coordinates and recognize the color in the big picture at that position, to recognize the green box fast later. And how would it decrease performance if the white box in the middle is transparent? The question has been asked several times on SO, it seems, without a single answer. I found a solution at http://werner.yellowcouch.org/Papers/subimg/index.html . Unfortunately it's in C++ and I do not understand a thing. Would be nice to have a Java implementation on SO.
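
    A minimal brute-force sketch of the idea (in Python with NumPy and Pillow rather than Java, purely for brevity): slide the small image across the big one and compare pixel blocks. This assumes an exact pixel match; a real screen capture would normally need a tolerance or a correlation-based comparison instead.

        import numpy as np
        from PIL import Image

        def find_subimage(big_path, small_path):
            # Return (x, y) of the top-left corner of the first exact match, or None.
            big = np.asarray(Image.open(big_path).convert("RGB"))
            small = np.asarray(Image.open(small_path).convert("RGB"))
            bh, bw, _ = big.shape
            sh, sw, _ = small.shape
            for y in range(bh - sh + 1):
                for x in range(bw - sw + 1):
                    if np.array_equal(big[y:y + sh, x:x + sw], small):
                        return x, y
            return None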

    Read the article

  • Adding a custom subview (created in a xib) to a view controller's view - What am I doing wrong

    - by Fran
    I've created a view in a xib (with an activity indicator, a progress view and a label). Then I've created .h/.m files: #import <UIKit/UIKit.h> @interface MyCustomView : UIView { IBOutlet UIActivityIndicatorView *actIndicator; IBOutlet UIProgressView *progressBar; IBOutlet UILabel *statusMsg; } @end #import "MyCustomView.h" @implementation MyCustomView - (id)initWithFrame:(CGRect)frame { if ((self = [super initWithFrame:frame])) { // Initialization code } return self; } - (void)dealloc { [super dealloc]; } @end In IB, I set the file's owner and view identity to MyCustomView and connect the IBOutlet to the File's owner In MyViewController.m, I've: - (void)viewDidLoad { [super viewDidLoad]; UIView *subView = [[MyCustomView alloc] initWithFrame:myTableView.frame]; [subView setBackgroundColor:[UIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:0.5]]; [myTableView addSubview:subView]; [subView release]; } When I run the app, the view is added, but I can't see the label, the progress bar and the activity indicator. What am I doing wrong?

    Read the article

  • [Java] Cluster Shared Cache

    - by GuiSim
    Hi everyone. I am searching for a Java framework that would allow me to share a cache between multiple JVMs. What I would need is something like Hazelcast but without the "distributed" part. I want to be able to add an item to the cache and have it automatically synced to the other "group member" cache. If possible, I'd like the cache to be sync'd via a reliable multicast (or something similar). I've looked at Shoal but sadly the "Distributed State Cache" seems like an insufficient implementation for my needs. I've looked at JBoss Cache but it seems a little overkill for what I need to do. I've looked at JGroups, which seems to be the most promising tool for what I need to do. Does anyone have experience with JGroups? Preferably if it was used as a shared cache? Any other suggestions? Thanks! EDIT: We're starting tests to help us decide between Hazelcast and Infinispan, I'll accept an answer soon. EDIT: Due to a sudden requirements change, we don't need a distributed map anymore. We'll be using JGroups as a low-level signaling framework. Thanks everyone for your help.
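
    Plain UDP multicast is not reliable (reliability is exactly what JGroups adds on top), but a toy Python sketch shows the put-and-broadcast shape being described; the group address, port and JSON message format here are made up for illustration.

        import json
        import socket
        import struct

        GROUP, PORT = "224.1.1.1", 5007  # placeholder multicast group and port

        def make_sender():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            return s

        def put(cache, sender, key, value):
            # Update the local cache and broadcast the change to the group.
            cache[key] = value
            sender.sendto(json.dumps({"k": key, "v": value}).encode(), (GROUP, PORT))

        def make_listener():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", PORT))
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            return s

        def apply_one_update(cache, listener):
            # Apply a single incoming update from another group member.
            msg = json.loads(listener.recv(65507).decode())
            cache[msg["k"]] = msg["v"]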

    Read the article

  • Resize view on iPhone when rotating

    - by BCBomb47
    I have an application with many views. I want only a couple of the views to be able to rotate to landscape when the device is rotated. I found out that I couldn't use (BOOL)shouldAutorotateToInterfaceOrientation because that would rotate every view in my app. I found a solution to this problem here on Stack Overflow but now I have another issue to deal with. The view rotates when I turn the device but it still shows the view as if it were in portrait mode (straight up and down). The top and bottom of the view are cut off. Is there a way to have the view rotate and also adjust its size to fit the new orientation? I also found this but wasn't able to get it to work. Here's my code for that view: @implementation businessBank @synthesize webView, activityIndicator; - (void)viewDidLoad { [super viewDidLoad]; NSString *urlAddress = @"website_url"; NSURL *url = [NSURL URLWithString:urlAddress]; NSURLRequest *requestObj = [NSURLRequest requestWithURL:url]; [webView loadRequest:requestObj]; [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didRotate:) name:UIDeviceOrientationDidChangeNotification object:nil]; } - (void)didRotate:(NSNotification *)notification { UIDeviceOrientation orientation = [[notification object] orientation]; if (orientation == UIDeviceOrientationLandscapeLeft) { [self.view setTransform:CGAffineTransformMakeRotation(M_PI / 2.0)]; } else if (orientation == UIDeviceOrientationLandscapeRight) { [self.view setTransform:CGAffineTransformMakeRotation(M_PI / -2.0)]; } else if (orientation == UIDeviceOrientationPortraitUpsideDown) { [self.view setTransform:CGAffineTransformMakeRotation(M_PI)]; } else if (orientation == UIDeviceOrientationPortrait) { [self.view setTransform:CGAffineTransformMakeRotation(0.0)]; } }

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to are usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know and what are some good resources to learn from? Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?) I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with? Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale. Any other suggestions on things to learn would be great!
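
    As a toy illustration of the MapReduce programming model mentioned above (plain Python, not actual Hadoop code): the map step emits key/value pairs and the reduce step folds the values for each key, which is the same shape a Hadoop job or a streaming script would have.

        from collections import defaultdict
        from itertools import chain

        def map_phase(document):
            # Emit one (word, 1) pair per word.
            return [(word, 1) for word in document.split()]

        def reduce_phase(pairs):
            # Group by key and sum the counts for each word.
            counts = defaultdict(int)
            for word, n in pairs:
                counts[word] += n
            return dict(counts)

        docs = ["big data is big", "data about data"]
        print(reduce_phase(chain.from_iterable(map_phase(d) for d in docs)))
        # {'big': 2, 'data': 3, 'is': 1, 'about': 1}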

    Read the article

  • Trouble using the ref keyword. Very newbie question!

    - by Sergio Tapia
    Here's my class: public class UserInformation { public string Username { get; set; } public string ComputerName { get; set; } public string Workgroup { get; set; } public string OperatingSystem { get; set; } public string Processor { get; set; } public string RAM { get; set; } public string IPAddress { get; set; } public UserInformation GetUserInformation() { var CompleteInformation = new UserInformation(); GetPersonalDetails(ref CompleteInformation); GetMachineDetails(ref CompleteInformation); return CompleteInformation; } private void GetPersonalDetails(ref UserInformation CompleteInformation) { } private void GetMachineDetails(ref UserInformation CompleteInformation) { } } I'm under the impression that the ref keyword tells the computer to use the same variable and not create a new one. Am I using it correctly? Do I have to use ref on both the calling code line and the actual method implementation?

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words)? If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • How to implement menuitems that depend on current selection in WPF MVVM explorer-like application

    - by Doug
    I am new to WPF and MVVM, and I am working on an application utilizing both. The application is similar to windows explorer, so consider an app with a main window with menu (ShellViewModel), a tree control (TreeViewModel), and a list control (ListViewModel). I want to implement menu items such as Edit - Delete, which deletes the currently selected item (which may be in the tree or in the list). I am using Josh Smith's RelayCommand, and binding the menuitem to a DeleteItemCommand in the ShellViewModel is easy. It seems like implementing the DeleteItemCommand, however, requires some fairly tight coupling between the ShellViewModel and the two child view models (TreeViewModel and ListViewModel) to keep track of the focus/selection and direct the action to the proper child for implementation. That seems wrong to me, and makes me think I'm missing something. Writing a focus manager and/or selection manager to do the bookkeeping does not seem too hard, and could be done without coupling the classes together. The windowing system is already keeping track of which view has the focus, and it seems like I'd be duplicating code. What I'm not sure about is how I would route the command from the ShellViewModel down to either the ListViewModel or the TreeViewModel to do the actual work without making a mess of the code. Some day, the application will be extended to include more than two children, and I want the shell to be as ignorant of the children as possible to make that extension as painless as possible. Looking at some sample WPF/MVVM applications (Karl Shifflett's CipherText, Josh Smith's MVVM Demo, etc.), I haven't seen any code that does this (or I didn't understand it). Regardless of whether you think my approach is way off base or I'm just missing a small nuance, please share your thoughts and help me get back on track. Thanks!

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to 'maybe' a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250kb to a very VERY unlikely 10mb (usually within the 250kb to 4mb range). There possibly could be up to a few hundred folders (in which case I could change the watch frequency depending on number of folders), but currently there are only a few (6-10ish). There also would be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
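
    For a rough sense of the cost, one poll is just a login plus a single NLIST per folder, which is tiny next to the downloads themselves. A sketch of the polling loop in Python's ftplib (rather than .NET, just to show the shape; the callback and the seen-set are placeholders):

        from ftplib import FTP

        def poll_folder(host, user, password, folder, seen, on_new_file):
            # One NLST pass over `folder`; fire `on_new_file` for unseen names.
            ftp = FTP(host)
            ftp.login(user, password)
            try:
                names = ftp.nlst(folder)  # name-only listing, no sizes needed
            finally:
                ftp.quit()
            for name in names:
                if name not in seen:
                    seen.add(name)
                    on_new_file(name)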

    Read the article

  • cocoa hello world screensaver

    - by RW
    I have been studying NSView and as such I thought I would give a shot at a screen saver. I have been able to display an image in an NSView but I can't seem to modify this example code to display a simple picture in a ScreenSaverView. http://www.mactech.com/articles/mactech/Vol.20/20.06/ScreenSaversInCocoa/ BTW great tutorial that works with Snow Leopard. I would think to simply display an image I would need something that looked like this... What am I doing wrong? // // try_screensaverView.m // try screensaver // #import "try_screensaverView.h" @implementation try_screensaverView - (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview { self = [super initWithFrame:frame isPreview:isPreview]; if (self) { [self setAnimationTimeInterval:1]; //refresh once per sec } return self; } - (void)startAnimation { [super startAnimation]; NSString *path = [[NSBundle mainBundle] pathForResource:@"leaf" ofType:@"JPG" inDirectory:@""]; image = [[NSImage alloc] initWithContentsOfFile:path]; } - (void)stopAnimation { [super stopAnimation]; } - (void)drawRect:(NSRect)rect { [super drawRect:rect]; } - (void)animateOneFrame { ////////////////////////////////////////////////////////// //load image and display This does not scale the image NSRect bounds = [self bounds]; NSSize newSize; newSize.width = bounds.size.width; newSize.height = bounds.size.height; [image setSize:newSize]; NSRect imageRect; imageRect.origin = NSZeroPoint; imageRect.size = [image size]; NSRect drawingRect = imageRect; [image drawInRect:drawingRect fromRect:imageRect operation:NSCompositeSourceOver fraction:1]; } - (BOOL)hasConfigureSheet { return NO; } - (NSWindow*)configureSheet { return nil; } @end

    Read the article

  • Drupal 6 node_view empty

    - by kristian nissen
    I'm trying to produce a page with a list of specific nodes but the node_view returns an empty string. This is my query: function events_upcoming() { $output = ''; $has_events = false; $res = pager_query(db_rewrite_sql("SELECT n.nid, n.created FROM {node} n WHERE n.type = 'events' AND n.status = 1 ORDER BY n.sticky DESC, n.created DESC"), variable_get('default_nodes_main', 10)); while ($n = db_fetch_object($res)) { $output .= node_view(node_load($n->nid), 1); $has_events = true; } if ($has_events) { $output .= theme('pager', NULL, variable_get('default_nodes_main', 10)); } return $output; } hook_menu (part of): 'events/upcoming' => array( 'title' => t('Upcoming Events'), 'page callback' => 'events_upcoming', 'access arguments' => array('access content'), 'type' => MENU_SUGGESTED_ITEM ), the implementation of hook_view: function events_view($node, $teaser = false, $page = false) { $node = node_prepare($node, $teaser); if ($page) { // TODO: Handle breadcrump } return $node; } now, if I add a var_dump($node) inside events_view the node is present and I can see the values I want, and if I add a var_dump inside while loop in events_upcoming I also get a node id from the query. the strange thing is, when I load localhost/events/upcoming I see the pager and nothing else. I have used the blog.module as a reference, but what am I missing here?

    Read the article

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate more. On the server, we will have a program that will take data from the iPhone, process that data, and produce a series of images. Each time an image is generated, it will be sent back to be displayed on the iPhone. I have done all of the things above using UDP, OpenGL, and such. It works. The images are transferred to the iPhone and can be displayed, but it is slow. The image's resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a slow framerate. I can see around 2-3 frames per second. There are also some UDP packets dropped, and this is expected. Is there any sort of compression method available for something like this? Is there any other method that can make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.
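
    One common route is to JPEG-encode each frame on the server before it goes out, instead of shipping raw pixels. A minimal sketch (Python with Pillow, assuming the server holds each frame as raw RGB bytes; the resolution is the one from the question and the quality value is arbitrary): a 320 x 420 frame shrinks from roughly 400 KB raw to a few tens of KB, and the received JPEG data can typically be handed straight to an image class such as UIImage on the device.

        import io
        from PIL import Image

        def encode_frame(raw_rgb_bytes, width=320, height=420, quality=60):
            # JPEG-compress one raw RGB frame before sending it over the socket.
            frame = Image.frombytes("RGB", (width, height), raw_rgb_bytes)
            buf = io.BytesIO()
            frame.save(buf, format="JPEG", quality=quality)
            return buf.getvalue()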

    Read the article

  • How to save a selected switch option

    - by Rkm
    In my app, a UIViewController has a button; tapping the button needs to present an Info table view controller, which itself contains a UISwitch. My question is: I need to save the last selected switch option (ON/OFF) for the UISwitch; how do I set up my control? @implementation Info // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } if (indexPath.section) { cell.textLabel.text = @"Sounds"; cell.selectionStyle = UITableViewCellSelectionStyleNone; UISwitch *switchView = [[UISwitch alloc] initWithFrame:CGRectZero]; cell.accessoryView = switchView; [switchView setOn:NO animated:NO]; [switchView addTarget:self action:@selector(switchChanged_Sounds:) forControlEvents:UIControlEventValueChanged]; [switchView release]; return cell ; } } - (void) switchChanged_Sounds:(id)sender { UISwitch* switchControl = sender; NSLog( @"The switchChanged_Sounds is %@", switchControl.on ? @"ON" : @"OFF" ); }

    Read the article
