Search Results

Search found 16168 results on 647 pages for 'shared state'.


  • How to extract the current state of the registry? (in C/C++, XP)

    - by Doori Bar
    I was wondering how one might extract the current state of the Windows XP registry, in C or C++, while the OS is active. I've been trying to use BackupRead() on the registry files, but it is impossible to CreateFile() them. I managed to create a Shadow Copy of the registry files, but it didn't reflect the current state of the registry. I would appreciate any hint... (I know ERUNT is able to do it.) Thanks, Doori Bar
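
    For what it's worth, the usual Win32 route for this is RegSaveKey()/RegSaveKeyEx() with SeBackupPrivilege enabled, rather than CreateFile() on the hive files, which Windows keeps locked while it is running. As a rough, hedged illustration of the same idea, the sketch below simply drives the built-in reg.exe tool from Java; the hive list and output paths are illustrative, and it must run with administrator rights.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Arrays;
        import java.util.List;

        public class RegistrySnapshot {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Illustrative output directory; reg.exe must run with administrator rights.
                Files.createDirectories(Paths.get("C:\\regbackup"));

                List<String[]> hives = Arrays.asList(
                        new String[] { "HKLM\\SYSTEM",   "C:\\regbackup\\system.hiv" },
                        new String[] { "HKLM\\SOFTWARE", "C:\\regbackup\\software.hiv" },
                        new String[] { "HKU\\.DEFAULT",  "C:\\regbackup\\default.hiv" });

                for (String[] hive : hives) {
                    // "reg save <key> <file>" writes the live hive to a file while the OS is running.
                    // Note that reg.exe prompts before overwriting an existing file.
                    Process p = new ProcessBuilder("reg", "save", hive[0], hive[1])
                            .inheritIO()
                            .start();
                    if (p.waitFor() != 0) {
                        System.err.println("Saving " + hive[0] + " failed");
                    }
                }
            }
        }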

    Read the article

  • How to make Windows 7 write to Samba shared folder?

    - by Jader Dias
    I can access and read a Samba folder from Windows 7. I've been following instructions from several sites. My Windows 7 machine is configured as described here: http://www.tomshardware.com/forum/75-63-windows-samba-issue http://www.linuxquestions.org/questions/linux-server-73/windows-7-beta-1-and-samba-696990/ And my smb.conf has a shared folder configured not to require authentication, as described here: http://ubuntuforums.org/showthread.php?t=658056 I also tried the following: chmod -R 775 sharedfolder and chown -R someuser:somegroup sharedfolder, and in smb.conf: create mask = 0775. But I still get the message that I have no permission to write.

    Read the article

  • Which file system to use for portable hard drive shared among different operating systems?

    - by Jonathon Watney
    Something similar has been asked already, but my criteria are a little different. I need to share a portable hard drive (USB/FireWire) between Mac OS X, Linux and Windows XP systems, where the files being shared are sometimes 4GB. Is there a file system available out of the box on all these operating systems that supports files of this size and allows read/write access? If not, what's the next best solution in terms of installing additional software on these operating systems?

    Read the article

  • How can I perform a syntax check on an .htaccess file in a shared hosting environment?

    - by Danny
    I have a build script (Perl) that modifies the .htaccess file when I deploy my applications. As a double-check, I'd like to be able to perform some sort of syntax check on the created .htaccess file. I am familiar with the idea of using apachectl -t; however, I am in a shared hosting environment, and because of file access restrictions I cannot read certain configuration files specified by the sysadmins, so apachectl simply will not work here. Ideas or suggestions welcome.

    Read the article

  • How can I automatically delete /tmp folder on shared drive?

    - by Matt
    We have a /tmp folder on a shared drive that people use for temporary files of any kind. We want to automatically delete (or preferably MOVE to another folder on the same shared drive) all files that haven't been accessed in the last two weeks. This should happen weekly, on a schedule, without manual intervention. Is there software out there that does this? Does anyone have a script, possibly? The server runs Windows Server 2008 R2.
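
    A possible scripted route, sketched in Java under the assumption that a JVM is available on the server and with purely illustrative paths, is shown below; it could be scheduled weekly with Task Scheduler. One caveat: Windows Server 2008 R2 disables last-access-time updates on NTFS by default, so this only works if that setting has been re-enabled (or if you key off the last-modified time instead).

        import java.io.IOException;
        import java.nio.file.*;
        import java.nio.file.attribute.BasicFileAttributes;
        import java.time.Instant;
        import java.time.temporal.ChronoUnit;
        import java.util.stream.Stream;

        public class TmpSweeper {
            public static void main(String[] args) throws IOException {
                // Illustrative paths; point these at the real shared-drive locations.
                Path tmp = Paths.get("\\\\server\\share\\tmp");
                Path archive = Paths.get("\\\\server\\share\\tmp-archive");
                Instant cutoff = Instant.now().minus(14, ChronoUnit.DAYS);
                Files.createDirectories(archive);

                try (Stream<Path> files = Files.walk(tmp)) {
                    files.filter(Files::isRegularFile).forEach(p -> {
                        try {
                            BasicFileAttributes attrs =
                                    Files.readAttributes(p, BasicFileAttributes.class);
                            // Move anything whose last access time is older than the cutoff.
                            // Name collisions in the archive overwrite the older copy.
                            if (attrs.lastAccessTime().toInstant().isBefore(cutoff)) {
                                Files.move(p, archive.resolve(p.getFileName()),
                                        StandardCopyOption.REPLACE_EXISTING);
                            }
                        } catch (IOException e) {
                            System.err.println("Skipping " + p + ": " + e.getMessage());
                        }
                    });
                }
            }
        }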

    Read the article

  • What alternatives are available for shared folders encryption in Windows 2003 Server?

    - by snakepitar
    People in our company asked us to encrypt some of the shared folders published on a local Windows 2003 file server. The requirements are: encrypt the files so that only a given user or group of users can open them; avoid password-protected files, since the encryption process should be transparent to the users; even though the files are encrypted, the backup software (BackupExec) must be able to copy them and access their binary content for verification; and we cannot install tools/software on users' PCs, as they want this to work automatically. As we have very little experience managing servers, we'll be grateful for any help or suggestions offered.

    Read the article

  • On my Mac, under the 'Shared' folder it shows another computer in my house, am I hacked?

    - by user27449
    I didn't set up any connection to another computer in my house (it's a PC), and I just noticed that under the 'Shared' section of the file browser on my Mac laptop I see the name of the PC. How could this have shown up when I never tried to connect to it? Could I possibly have been hacked, or is it normal for the Mac to simply discover other machines on our internal wireless network? I haven't set up any kind of network really; I just have a wireless modem that the other computers share.

    Read the article

  • HTML5/JS - Choppy Game Loop

    - by Rikonator
    I have been experimenting with HTML5/JS, trying to create a simple game, when I hit a wall: my choice of game loop is too choppy to actually be of any use in a game. I'm aiming for a fixed-timestep loop, rendering only when required. I simply use requestAnimationFrame to run Game.update, which finds the elapsed time since the last update and calls State.update to update and render the current state. State.prototype.update = function(ms) { this.ticks += ms; var updates = 0; while(this.ticks >= State.DELTA_TIME && updates < State.MAX_UPDATES) { this.updateState(); this.updateFrameTicks += State.DELTA_TIME; this.updateFrames++; if(this.updateFrameTicks >= 1000) { this.ups = this.updateFrames; this.updateFrames = 0; this.updateFrameTicks -= 1000; } this.ticks -= State.DELTA_TIME; updates++; } if(updates > 0) { this.renderFrameTicks += updates*State.DELTA_TIME; this.renderFrames++; if(this.renderFrameTicks >= 1000) { this.rps = this.renderFrames; this.renderFrames = 0; this.renderFrameTicks -= 1000; } this.renderState(updates*State.DELTA_TIME); } }; But this strategy does not work very well. This is the result: http://jsbin.com/ukosuc/1 (Edit). As is apparent, the 'game' has fits of lag, and when you tab out for a long period and come back, the 'game' behaves unexpectedly, updating faster than intended. This is either a problem with something about game loops that I don't quite understand yet, or an implementation problem that I can't pinpoint. I haven't been able to solve it despite attempting several variations using setTimeout and requestAnimationFrame (one such example is http://jsbin.com/eyarod/1/edit). Some help and insight would really be appreciated!
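
    A common remedy, described in most fixed-timestep write-ups, is to clamp the frame delta before adding it to the accumulator, so a backgrounded tab cannot queue up a huge backlog of updates, and to pass the leftover fraction to the renderer as an interpolation factor to smooth out the remaining choppiness. The sketch below shows the shape of that loop in Java rather than JavaScript, with requestAnimationFrame stood in for by a sleep; the structure maps back onto the State.update code above almost line for line.

        public class FixedStepLoop {
            static final double DT = 1000.0 / 60.0;      // fixed update step, in ms
            static final double MAX_FRAME_TIME = 250.0;  // clamp long pauses (e.g. a backgrounded tab)

            public static void main(String[] args) throws InterruptedException {
                double accumulator = 0.0;
                long previous = System.nanoTime();

                for (int frame = 0; frame < 600; frame++) {   // ~10 seconds for the demo
                    long now = System.nanoTime();
                    double elapsed = (now - previous) / 1_000_000.0;
                    previous = now;

                    // Without this clamp a long pause queues up a huge backlog of updates,
                    // and the simulation "fast-forwards" when focus returns.
                    accumulator += Math.min(elapsed, MAX_FRAME_TIME);

                    while (accumulator >= DT) {
                        update(DT);
                        accumulator -= DT;
                    }
                    render(accumulator / DT);   // interpolation factor for smooth motion
                    Thread.sleep(16);           // stand-in for requestAnimationFrame
                }
            }

            static void update(double dt)    { /* advance the game state by dt ms */ }
            static void render(double alpha) { /* draw, interpolating by alpha */ }
        }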

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable and so would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So, to be clear, we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references in the first place. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale is down to a lack of distinction within the application between what is data and what is presentation. I think, perhaps, a cause of this is the logical separation between the business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course, that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem: I think developers try and use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That, of course, is fine in context, e.g. in a handler within the same request-scoped bean that holds the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope so that you can simply re-check it in the same way, leading of course to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state.
    The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in faces-config.xml. The pattern is then to inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

        <managed-bean>
          <managed-bean-name>uiManager</managed-bean-name>
          <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
          <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

    This is then injected into any backing bean that needs it:

        <managed-bean>
          <managed-bean-name>mainPageBB</managed-bean-name>
          <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
          <managed-bean-scope>request</managed-bean-scope>
          <managed-property>
            <property-name>uiManager</property-name>
            <property-class>oracle.demo.view.state.UIManager</property-class>
            <value>#{uiManager}</value>
          </managed-property>
        </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager: private UIManager _uiManager; which should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties. For example, going back to the tab-set example, you might have something like this:

        <af:panelTabbed>
          <af:showDetailItem text="First"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
            ...
          </af:showDetailItem>
          <af:showDetailItem text="Second"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
            ...
          </af:showDetailItem>
          ...
        </af:panelTabbed>

    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (This is just an example; you could choose to store just an identifier for the selected tab instead. How you store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is no great strain to implement for an application, and it can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or scattering shotgunned UI flags across the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager, or perhaps a pageScope-focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!
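
    As a concrete starting point, a minimal sketch of the UIManager bean itself might look like the following. The class and package names are taken from the faces-config snippet above; the seeded keys simply mirror the tab-set example, so adapt them to whatever state your pages actually need.

        package oracle.demo.view.state;

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        public class UIManager implements Serializable {
            private static final long serialVersionUID = 1L;

            // General-purpose flag storage, addressable from EL as
            // #{uiManager.settings['SOME_KEY']}.
            private Map<String, Object> settings = new HashMap<String, Object>();

            public UIManager() {
                // Seed the tab-set example from the article: a Map of Booleans per tab.
                Map<String, Boolean> tabState = new HashMap<String, Boolean>();
                tabState.put("FIRST", Boolean.TRUE);
                tabState.put("SECOND", Boolean.FALSE);
                settings.put("MAIN_TABSET_STATE", tabState);
            }

            public Map<String, Object> getSettings() {
                return settings;
            }
        }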

    Read the article

  • Custom Gesture in cocos2d

    - by Lewis
    I've found a little tutorial that would be useful for my game: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ But I can't work out how to convert that gesture to work with cocos2d, I have found examples of pre made gestures in cocos2d, but no custom ones, is it possible? EDIT STILL HAVING PROBLEMS WITH THIS: I've added the code from Sentinel below (from SO), the Gesture and RotateGesture have both been added to my solution and are compiling. Although In the rotation class now I only see selectors, how do I set those up? As the custom gesture found in that project above looks like: header file for custom gesture: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end .m for custom gesture file: #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Then its initialised like this: // calculate center and radius of the control CGPoint midPoint = CGPointMake(image.frame.origin.x + image.frame.size.width / 2, image.frame.origin.y + image.frame.size.height / 2); CGFloat outRadius = image.frame.size.width / 2; // outRadius / 3 is arbitrary, just choose something >> 0 to avoid strange // effects when touching the control near of it's center gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; The selector below is also in the same file where the initialisation of the gestureRecogonizer: - (void) rotation: (CGFloat) angle { // calculate rotation angle imageAngle += angle; if (imageAngle > 360) imageAngle -= 360; else if (imageAngle < -360) imageAngle += 360; // rotate image and update text field image.transform = CGAffineTransformMakeRotation(imageAngle * M_PI / 180); [self updateTextDisplay]; } I can't seem to get this working in the RotateGesture class can anyone help me please 
I've been stuck on this for days now. SECOND EDIT: Here is the users code from SO that was suggested to me: Here is projec on GitHub: SFGestureRecognizers It uses builded in iOS UIGestureRecognizer, and don't needs to be integrated into cocos2d sources. Using it, You can make any gestures, just like you could, if you whould work with UIGestureRecognizer. For example: I made a base class Gesture, and subclassed it for any new gesture: //Gesture.h @interface Gesture : NSObject <UIGestureRecognizerDelegate> { UIGestureRecognizer *gestureRecognizer; id delegate; SEL preSolveSelector; SEL possibleSelector; SEL beganSelector; SEL changedSelector; SEL endedSelector; SEL cancelledSelector; SEL failedSelector; BOOL preSolveAvailable; CCNode *owner; } - (id)init; - (void)addGestureRecognizerToNode:(CCNode*)node; - (void)removeGestureRecognizerFromNode:(CCNode*)node; -(void)recognizer:(UIGestureRecognizer*)recognizer; @end //Gesture.m #import "Gesture.h" @implementation Gesture - (id)init { if (!(self = [super init])) return self; preSolveAvailable = YES; return self; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer { return YES; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldReceiveTouch:(UITouch *)touch { //! For swipe gesture recognizer we want it to be executed only if it occurs on the main layer, not any of the subnodes ( main layer is higher in hierarchy than children so it will be receiving touch by default ) if ([recognizer class] == [UISwipeGestureRecognizer class]) { CGPoint pt = [touch locationInView:touch.view]; pt = [[CCDirector sharedDirector] convertToGL:pt]; for (CCNode *child in owner.children) { if ([child isNodeInTreeTouched:pt]) { return NO; } } } return YES; } - (void)addGestureRecognizerToNode:(CCNode*)node { [node addGestureRecognizer:gestureRecognizer]; owner = node; } - (void)removeGestureRecognizerFromNode:(CCNode*)node { [node removeGestureRecognizer:gestureRecognizer]; } #pragma mark - Private methods -(void)recognizer:(UIGestureRecognizer*)recognizer { CCNode *node = recognizer.node; if (preSolveSelector && preSolveAvailable) { preSolveAvailable = NO; [delegate performSelector:preSolveSelector withObject:recognizer withObject:node]; } UIGestureRecognizerState state = [recognizer state]; if (state == UIGestureRecognizerStatePossible && possibleSelector) { [delegate performSelector:possibleSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateBegan && beganSelector) [delegate performSelector:beganSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateChanged && changedSelector) [delegate performSelector:changedSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateEnded && endedSelector) { preSolveAvailable = YES; [delegate performSelector:endedSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateCancelled && cancelledSelector) { preSolveAvailable = YES; [delegate performSelector:cancelledSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateFailed && failedSelector) { preSolveAvailable = YES; [delegate performSelector:failedSelector withObject:recognizer withObject:node]; } } @end Subclass example: //RotateGesture.h #import "Gesture.h" @interface RotateGesture : Gesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve 
possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed; @end //RotateGesture.m #import "RotateGesture.h" @implementation RotateGesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed { if (!(self = [super init])) return self; preSolveSelector = preSolve; delegate = target; possibleSelector = possible; beganSelector = began; changedSelector = changed; endedSelector = ended; cancelledSelector = cancelled; failedSelector = failed; gestureRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(recognizer:)]; gestureRecognizer.delegate = self; return self; } @end Use example: - (void)addRotateGesture { RotateGesture *rotateRecognizer = [[RotateGesture alloc] initWithTarget:self preSolveSelector:@selector(rotateGesturePreSolveWithRecognizer:node:) possibleSelector:nil beganSelector:@selector(rotateGestureStateBeganWithRecognizer:node:) changedSelector:@selector(rotateGestureStateChangedWithRecognizer:node:) endedSelector:@selector(rotateGestureStateEndedWithRecognizer:node:) cancelledSelector:@selector(rotateGestureStateCancelledWithRecognizer:node:) failedSelector:@selector(rotateGestureStateFailedWithRecognizer:node:)]; [rotateRecognizer addGestureRecognizerToNode:movableAreaSprite]; } I dont understand how to implement the custom gesture code at the start of this post into the rotateGesture class which is a subclass of the gesture class written by the SO user. Any ideas please? When I get 6 more rep I'll add a bounty to this.

    Read the article

  • Is WCF suitable for writing an application which is shared among applications?

    - by RPK
    I have developed and deployed a few ASP.NET applications. Sometimes I want to stop users from inserting or updating a record when: maintenance is going on, or operations are stopped due to a payment delay. In one of my recent applications I implemented this by first checking a locked status before performing database operations; if either of the above conditions is met, operations like insert and update are not carried out. I now need this feature in all the old applications and in the future applications I build. I want to know whether WCF is suitable in this scenario, as I want to share these methods, or an independent locking application, among various other applications. Is WCF appropriate for this type of scenario?

    Read the article

  • Is it a good idea to create shared UI library that would render natively on different platforms?

    - by Maciej Donajski
    I am designing an application with the following flow: (1) a user designs a form using a web application (a J2EE backend application); (2) the form is sent to a mobile device (Android); (3) the mobile device user fills out the form designed in step 1; (4) the results are synced with the backend. One of my ideas is to create a common Java UI library for creating the type of forms that I need. This library would also have native renderers for the different platforms (Web and Android would be implemented first). The whole point of it is to have a native experience on both the web and Android sides. Are there any existing solutions that meet these requirements? Is this a good approach to achieving them?

    Read the article

  • Is defining every method/state per object in a series of UML diagrams representative of MDA in general?

    - by Max
    I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where "stuff happens". For example, a UML class "Content" could have the method DeleteFromFileSystem(void). Which could be implemented like this: public partial class Content { public void DeleteFromFileSystem() { File.Delete(...); } } All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? For now my impression of MDA/DDD (which is what this has been called by higher-ups) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched and interspersed into the mentioned gargantuan bombs. Please refrain from interpreting this as a rant; I am merely curious whether this is typical MDA or some sort of extreme MDA. UPDATE: Concerning the example above, in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality, scattered over multiple places, instead of having one single interface for which we can provide a second implementation.
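
    To make the point in the UPDATE concrete: the alternative is to hide storage behind a small interface that the domain class delegates to, so that switching from the local file system to Amazon S3 means adding one new implementation rather than hunting down scattered File.Delete calls. A rough sketch, in Java here rather than the C# of the example and with purely illustrative names:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        interface ContentStorage {
            void delete(String path) throws IOException;
        }

        class LocalFileStorage implements ContentStorage {
            @Override
            public void delete(String path) throws IOException {
                Files.deleteIfExists(Paths.get(path));
            }
        }

        // Hypothetical second implementation; switching to S3 now touches one class only.
        class S3Storage implements ContentStorage {
            @Override
            public void delete(String path) {
                // call the S3 client's delete-object API for this key here
            }
        }

        class Content {
            private final String path;
            private final ContentStorage storage;

            Content(String path, ContentStorage storage) {
                this.path = path;
                this.storage = storage;
            }

            void delete() throws IOException {
                storage.delete(path);   // delegates instead of touching the file system directly
            }

            public static void main(String[] args) throws IOException {
                Content doc = new Content("/tmp/example.bin", new LocalFileStorage());
                doc.delete();           // swap in new S3Storage() without changing Content
            }
        }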

    Read the article

  • What are the appropriate mount options for a shared NTFS partition on an SSD in a dual boot Ubuntu/Windows setup?

    - by Andreas Jonsson
    I have Ubuntu 13.10 and Windows 7 installed in dual boot on a single SSD. In addition they share an NTFS partition where I put all my data and documents. What is the optimal way to mount this NTFS partition in /etc/fstab (considering performance and minimizing wear of the SSD)? Similar questions have been asked, but I could not find answers to this particular scenario. As I understand it, the mount option 'discard' is not supported for NTFS and so should not be used (although it is recommended here). Another often quoted mount option is 'noatime'. I use it for my ext4 partitions. Does it apply to NTFS? My current /etc/fstab line is: UUID=XXXXXXXXXXXXXXXX /dos ntfs defaults,nls=utf8,uid=1000,gid=1000 0 0

    Read the article

  • Overloading interface buttons, what are the best practices?

    - by XMLforDummies
    Imagine you'll always have a button labeled "Continue" in the same position in your app's GUI. Would you rather make a single button instance that takes different actions depending on the current state? private State currentState = State.Step1; private ContinueButton_Click() { switch(currentState) { case State.Step1: DoThis(); currentState = State.Step2; break; case State.Step2: DoThat(); break; } } Or would you rather have something like this? public Form() { this.ContinueStep2Button.Visible = false; } private ContinueStep1Button_Click() { DoThis(); this.ContinueStep1Button.Visible = false; this.ContinueStep2Button.Visible = true; } private ContinueStep2Button_Click() { DoThat(); }
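
    A third option worth weighing against the two above is the classic State pattern: keep the single button, but move the per-step behaviour into small step objects so the click handler never grows a switch. A rough sketch, in Java rather than the question's C# and with hypothetical step contents:

        import java.util.ArrayDeque;
        import java.util.Deque;

        interface WizardStep {
            void onContinue();   // what "Continue" means in this step
        }

        public class WizardController {
            private final Deque<WizardStep> steps = new ArrayDeque<>();

            public WizardController() {
                // Hypothetical steps; each encapsulates its own DoThis()/DoThat() logic.
                steps.add(() -> System.out.println("Step 1: collect input"));
                steps.add(() -> System.out.println("Step 2: confirm and submit"));
            }

            // Wire this to the single Continue button's click handler.
            public void continueClicked() {
                WizardStep current = steps.poll();
                if (current != null) {
                    current.onContinue();
                }
            }

            public static void main(String[] args) {
                WizardController controller = new WizardController();
                controller.continueClicked();   // runs step 1
                controller.continueClicked();   // runs step 2
            }
        }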

    Read the article

  • Uploading or attaching files located on a shared drive doesn't work?

    - by Alex
    I have this odd, quite minor, but annoying issue that I am quite perplexed about. Whenever I try to upload a file via my browser (let's say attach a file to an email in Gmail), I click the 'Browse' button and it opens the standard file selection dialog, which doesn't show network drives. Furthermore, if I try to drag a file from a network drive into Gmail, it doesn't work either; it just doesn't let me do that. This issue has been around for quite some time now, and I am just curious whether this is something on my side, or a bug or misconfiguration of some sort. FWIW, I am currently running 10.10, and the network drive is a Samba share on a NAS. This happens in Firefox and Chrome, and it only happens with Samba mounts. As a matter of fact, NFS volumes located on the same network work perfectly fine.

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As these threads update themselves, they produce tasks and put them into a public storage area. Once all the threads finish their update, each thread goes and copies its tasks one by one. After all the threads finish copying their tasks, we have the threads process those tasks and then update them simultaneously again, as described before. So this is the general idea of the task scheduling part of our engine. OK, the task scheduling part works well, but here's the problem. Taking the Camera as the simplest example: local oldPos = camera:getPosition() -- ( 0, 0, 0 ) camera:setPosition( 1, 1, 1 ) -- Won't take effect yet, because the render thread will process the task at the beginning of the next frame local newPos = camera:getPosition() -- Still ( 0, 0, 0 ) So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. So, is there a way to solve this problem? Or are we building the task scheduling part the wrong way? Thanks for your answers :)
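
    One common way out is to keep the authoritative copy of such properties on the side that writes them: the script-facing object updates its own value immediately and merely enqueues a command for the render thread, so getters never lag a frame behind. A minimal sketch of that idea, with hypothetical names:

        import java.util.Arrays;
        import java.util.concurrent.ConcurrentLinkedQueue;

        public class CameraProxy {
            // Commands the render thread drains at the start of its next frame.
            private final ConcurrentLinkedQueue<Runnable> renderCommands =
                    new ConcurrentLinkedQueue<Runnable>();

            // Authoritative, script-side copy of the state.
            private float x, y, z;

            public synchronized void setPosition(float nx, float ny, float nz) {
                x = nx; y = ny; z = nz;   // visible to the script immediately
                renderCommands.add(() -> applyToRenderCamera(nx, ny, nz));   // applied next frame
            }

            public synchronized float[] getPosition() {
                return new float[] { x, y, z };
            }

            // Called from the render thread once per frame.
            public void drainRenderCommands() {
                Runnable cmd;
                while ((cmd = renderCommands.poll()) != null) {
                    cmd.run();
                }
            }

            private void applyToRenderCamera(float nx, float ny, float nz) {
                // Hypothetical hook: push the values into the actual render-side camera here.
            }

            public static void main(String[] args) {
                CameraProxy camera = new CameraProxy();
                camera.setPosition(1f, 1f, 1f);
                // The script sees the new value right away, even though the render thread
                // will not apply it until it drains the queue at the start of the next frame.
                System.out.println(Arrays.toString(camera.getPosition()));   // [1.0, 1.0, 1.0]
                camera.drainRenderCommands();   // normally invoked from the render thread
            }
        }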

    Read the article

  • How to review the current state of open source vs. closed source graphics drivers?

    - by Bucic
    How do I know whether it's worth replacing the open source drivers installed by default with proprietary ones? Are there any benchmarks? Summaries of major known issues? I don't mean 'at the time of writing this post'; I mean an up-to-date status of how the drivers compare. This page, https://help.ubuntu.com/community/BinaryDriverHowto/, certainly doesn't do much on the matter, nor does it even mention Intel. EDIT: I've just learned there is no Intel proprietary driver, because they made their drivers open source: http://askubuntu.com/a/17395/29347

    Read the article

  • How to turn off Libnotify notifications only when sound is in muted state?

    - by Michael Butler
    I have a multimedia keyboard that allows me to easily mute the sound (Ubuntu 12.04). It would be nice to "link" this so that muting also turns off the libnotify messages that pop up in the top right corner (e.g. Pidgin messages). So when Ubuntu is muted, no libnotify messages would pop up; when not muted, messages would show as normal. Is this possible with a script of some kind, or would it require changing source code?

    Read the article

  • Project Management Tool for developers and sysadmins: shared or separate?

    - by David
    Should a team of system administrators working on a software development project share a project management tool with the developers, or use their own separate one? We use Trac, and I see the benefit in sharing, since inter-team tasks can be maintained in a single system where there may be cross-over or misfiled bugs (e.g. an apparent bug which turns out to be a server configuration issue, or a development cycle which needs a server to be configured before it can start). However, sharing could be difficult, since many system administration tasks don't coincide with a single development milestone, if with any at all. So should a system administration team use a separate PM tool or share the same one with the developers? If they should share, then how?

    Read the article
