Search Results

Search found 16644 results on 666 pages for 'traffic management'.


  • Quest releases NetVault Backup, Spotlight, Foglight, JClass, JProbe, Shareplex, Management Console and Authentication Services on Solaris 11

    - by user13333379
    Quest released the following products on Solaris 11 (SPARC, x64):

    - Quest NetVault Backup Server: v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest NetVault Backup Client: v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Spotlight on Unix: v8.0 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Spotlight on Oracle: v9.0 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Authentication Services (formerly Vintela Authentication Services): v4.0.3 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest One Management Console for Unix (formerly Quest Identity Manager for Unix): Solaris 11, 10, 9; SPARC/x86/64
    - Quest Foglight for Operating System: v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest Foglight Agent Manager: v5.6.x - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest Foglight Cartridge for Infrastructure: v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest JClass: v6.5 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest JProbe: v9.5 - Solaris 11; x86
    - Quest Shareplex for Oracle: v7.6.3 - Solaris 11, 10, 9; SPARC/x86/64

    Read the article

  • Configuring SQL Server Management Studio to use Windows Integrated Authentication … from non-domain-joined machines

    - by Enrique Lima
    Did you know you can pass your Windows credentials to SQL Server even when working from a workstation that is not joined to a domain? Here is how.

    From Start, click All Programs and find Microsoft SQL Server (version 2005 or 2008). Once there, right-click SQL Server Management Studio, then click Properties. Now modify the shortcut's Target entry using the runas command, and remember to replace <domain\user> with the values that correspond to your environment:

    x64, SQL Server 2008:

    ```
    C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
    ```

    x64, SQL Server 2005:

    ```
    C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"
    ```

    x86, SQL Server 2008:

    ```
    C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
    ```

    x86, SQL Server 2005:

    ```
    C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"
    ```

    Since we modified the shortcut, we need to fix the icon for SSMS. Press the Change Icon… button and point to the original icon provider; it is the SSMS executables that hold the icon information, so point to:

    - x64, SQL Server 2008: %ProgramFiles(x86)%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
    - x64, SQL Server 2005: C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe
    - x86, SQL Server 2008: %ProgramFiles%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
    - x86, SQL Server 2005: C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe

    When you start SSMS from the modified shortcut, you'll be prompted for your domain password. SSMS will show a different account in the username box, but the credentials you configured above are passed on correctly.

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with JMS-based messaging systems, such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found that there are a couple of open-source implementations: OpenSplice and OpenDDS.

    It sounds like this technology is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and whatnot. My current interest is more along the lines of notifications related to activity-stream processing (think Twitter/Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers, and/or AMQP (or even STOMP, OpenWire, etc.)?

    Edit: FWIW, I found some information in this StackOverflow thread. It is not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.

    Read the article

  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a Tetris game in C++ and SDL, and I'm trying to do it "right" by making it object-oriented and keeping scopes small. So far I have the following structure:

    - A main with some low-level SDL setup and input handling
    - A game class that keeps track of the score and provides the interface for main (move block down, etc.)
    - A map class that keeps track of the current game field and which blocks are where; used by the game class
    - A block class that represents the currently falling block; used by game
    - A renderer class abstracting low-level SDL to a format where you render "tetris blocks"; used by map and block

    Now I'm having a tough time deciding where to place the time management of this game. For example, where should it be decided how long it takes, after the current block bumps the bottom of the field, before it locks in place and a new block spawns? (One possible arrangement is sketched below.)

    I also have another, unrelated question: is there some place where you can find standard data on Tetris, like standard score tables, rulesets, timings, etc.?
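
    One possible arrangement is to let main own the real clock (e.g. via SDL_GetTicks()) and pass elapsed time into the game class, which then owns all gameplay timing such as fall speed and lock delay. This is only an illustrative sketch (C++11); the names and values are assumptions, not taken from the post:

    ```cpp
    #include <cstdint>

    class Game {
    public:
        // main calls this once per frame with the elapsed milliseconds.
        void update(uint32_t elapsedMs) {
            fallTimerMs += elapsedMs;
            if (fallTimerMs >= fallIntervalMs) {
                fallTimerMs = 0;
                if (canMoveDown())
                    moveBlockDown();
            }
            if (!canMoveDown()) {
                // The block is resting on the stack: run down the lock delay.
                lockTimerMs += elapsedMs;
                if (lockTimerMs >= kLockDelayMs) {
                    lockBlockIntoMap();
                    spawnNextBlock();
                    lockTimerMs = 0;
                }
            } else {
                lockTimerMs = 0; // the block can fall again, reset the delay
            }
        }

    private:
        static const uint32_t kLockDelayMs = 500; // illustrative value
        uint32_t fallIntervalMs = 800;            // ms per one-row drop
        uint32_t fallTimerMs = 0;
        uint32_t lockTimerMs = 0;

        bool canMoveDown() const; // asks the map whether the block can fall
        void moveBlockDown();
        void lockBlockIntoMap();
        void spawnNextBlock();
    };
    ```

    This keeps the renderer and map free of timing concerns; they only answer queries and draw state.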

    Read the article

  • Which version management design methodology should be used in a system of dependent nodes?

    - by actiononmail
    This is my first question, so please let me know if it is too vague or hard to understand. My question relates to high-level design.

    We have a system (specifically an ATCA chassis) configured in a star topology, with a master node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it is Linux, the ramdisk or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents the good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version.

    So I am in a dilemma whether to use the package management methodology used by Debian-based distributions (like Ubuntu) or a Git-repository methodology in order to roll back to previous good versions on either one SN or all the dependent SNs. The method should also make it easy to upgrade SNs along with MNs. Some of the features I am trying to achieve:

    1) An upgrade of even a single software entity is achievable without hindering the others.
    2) Dependency checks must be done before applying a rollback or upgrade on each SN.
    3) A user prompt should be given if a dependency check fails. If the user still goes ahead with the rollback, all the SNs should get a notification to roll back their own releases (if required).
    4) The binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching them from the MN every time.
    5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system.
    6) Each version can be easily tracked and distinguished.

    (A sketch of what one entry in the state version matrix might hold appears below.) Thanks
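
    To make the state version matrix idea concrete, here is a hypothetical sketch of the data it might hold; none of these names come from the question, and the schema is purely illustrative:

    ```cpp
    #include <map>
    #include <string>
    #include <vector>

    // One subordinate node's known-good software versions for a given state.
    struct NodeState {
        std::string kernelVersion;          // known-good kernel
        std::string ramdiskVersion;         // known-good ramdisk
        std::string appVersion;             // known-good application bundle
        std::vector<std::string> dependsOn; // SNs whose state must match
    };

    // State N maps each subordinate node ID to its known-good versions,
    // so a rollback to state N can be dependency-checked before it runs.
    typedef std::map<int, std::map<std::string, NodeState> > StateVersionMatrix;
    ```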

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA. Basically, I'm looking for a piece of software that:

    - Facilitates developer-client communication/collaboration
    - Does time tracking

    Requirements:

    - Records time estimates / time tracking
    - Clients must be able to create/edit their own tickets/cases
    - Clients must not see developer-created (internal) tickets/cases
    - Affordable (price) with multiple clients

    Nice-to-haves:

    - Supports multiple projects in one installation
    - Free Eclipse integration (Mylyn)
    - Easy time tracking without using the web UI (Trac's post-commit hook or Redmine's commit-message scanning)
    - Clients can access the wiki
    - Exports the data to standard formats

    My evaluation:

    Trac can basically fulfill most of the above requirements, but with so many customizations and plug-ins that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more, and I still haven't seen much sign of an upcoming release.

    Redmine has the cleanest web UI. Its design philosophy seems to be the most elegant, with its innovative commit-message scanning and so on. However, the current version doesn't seem very mature or stable yet. It doesn't support internal (private) tickets, and the time-tracking commit-message patch doesn't support the trunk version. The good news is that the main trunk still seems to be actively developed.

    FogBugz is actually a very well-written piece of software. However, the idea of paying $25/month for each client to be able to log in to the system seems a little too far off for an individual developer. The free version lets clients create/view their own cases using email, which is a sub-optimal alternative to a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. All of the Eclipse integrations (Bugclipse, Foglyn) are commercial - yet more investment before I can use my bug tracker! And if I revert to the web UI, it's not a particularly fast-rendering web service. On the other hand, the built-in report functions are excellent (e.g. evidence-based scheduling).

    JIRA is something I have zero experience with. Can someone with JIRA experience explain why it might be a good fit for this particular situation?

    Question: Can we share experience on this? Are there any specific plugins/customizations that would best suit the requirements for this case?

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existing", which also includes "No (real) customer/user involved".

    EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion the 10% plus/minus is a good range for an offer/tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But IMHO it comes out that most developers are not happy with this situation. Perhaps because it is not the managers who get penalized when a project is not successful, but the "lazy, incompetent developer teams"? When I read the posts I can also sense that there is a big gap between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time/budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes...). I'm asking myself: How can we improve things? Or do we even have the ability to improve them? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high-quality requirements" and "realistic, iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new community called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.

    Read the article

  • EXC_BAD_ACCESS error on a UITableView with ARC?

    - by Arnaud Drizard
    I am building a simple app with a custom tab bar which loads the content of the view controller from a UITableView located in another view controller. However, every time I try to scroll the table view, I get an EXC_BAD_ACCESS error. I enabled NSZombies and guard malloc to get more info on the issue. In the console I get "message sent to deallocated instance 0x19182f20", and after profiling I get:

    ```
    #  Address    Category                   Event Type RefCt Timestamp     Size Responsible Library Responsible Caller
    56 0x19182f20 FirstTabBarViewController  Zombie     -1    00:16.613.309 0    UIKit               -[UIScrollView(UIScrollViewInternal) _scrollViewWillBeginDragging]
    ```

    Here is a bit of the code of the view controller in which the error occurs.

    The .h file:

    ```objc
    #import <UIKit/UIKit.h>
    #import "DataController.h"

    @interface FirstTabBarViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> {
        IBOutlet UITableView *tabBarTable;
    }

    @property (strong, nonatomic) IBOutlet UIView *mainView;
    @property (strong, nonatomic) IBOutlet UITableView *tabBarTable;
    @property (nonatomic, strong) DataController *messageDataController;

    @end
    ```

    The .m file:

    ```objc
    #import "FirstTabBarViewController.h"
    #import "DataController.h"

    @interface FirstTabBarViewController ()
    @end

    @implementation FirstTabBarViewController

    @synthesize tabBarTable = _tabBarTable;

    - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
    {
        self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
        if (self) {
            // Custom initialization
        }
        return self;
    }

    - (void)viewDidLoad
    {
        [super viewDidLoad];
    }

    - (void)didReceiveMemoryWarning
    {
        [super didReceiveMemoryWarning];
        // Dispose of any resources that can be recreated.
    }

    - (void)awakeFromNib
    {
        [super awakeFromNib];
        self.messageDataController = [[DataController alloc] init];
    }

    - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
    {
        return 1;
    }

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
    {
        return [self.messageDataController countOfList];
    }

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        static NSString *CellIdentifier = @"mainCell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
        }
        NSString *expenseAtIndex = [self.messageDataController objectInListAtIndex:indexPath.row];
        [[cell textLabel] setText:expenseAtIndex];
        return cell;
    }

    - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
    {
        // Return NO if you do not want the specified item to be editable.
        return NO;
    }

    @end
    ```

    FirstTabBarViewController is loaded in the MainViewController with the following custom segue:

    ```objc
    #import "customTabBarSegue.h"
    #import "MainViewController.h"

    @implementation customTabBarSegue

    - (void)perform
    {
        MainViewController *src = (MainViewController *)[self sourceViewController];
        UIViewController *dst = (UIViewController *)[self destinationViewController];
        for (UIView *view in src.placeholderView.subviews) {
            [view removeFromSuperview];
        }
        src.currentViewController = dst;
        [src.placeholderView addSubview:dst.view];
    }

    @end
    ```

    The DataController class is just a simple NSMutableArray containing strings. I am using ARC, so I don't understand where the memory management error comes from. Does anybody have a clue? Any help much appreciated ;) Thanks!!

    Read the article

  • Opera Mobile, offline web app development, and memory

    - by Jake Krohn
    I'm developing a data collection app for use on an HP iPAQ 211. I'm doing it as an offline web app (go with what you know) using Opera Mobile 9.7 and Google Gears. Being an offline app, it is very dependent on JavaScript for much of its behavior. I'm using the LocalServer, Database, and Geolocation components of Gears, as well as the jQuery core and a couple of plugins for form validation and other usability tweaks (no jQuery UI). I've tried to be conservative with my programming style and free up or close resources whenever possible, but Opera just slowly dies after about 10-20 minutes of use. The JavaScript engine stops responding, pages only half-load, and eventually they stop loading completely. I'm guessing it's a resource issue. Quitting and relaunching the browser solves the problem, but only temporarily. The iPAQ ships with 128 MB of RAM, about 85-87 MB of which is available immediately after a reset. With only Opera running, there still remains about 50 MB that is left unused.

    My questions are thus:

    1. Is it possible to get Opera to address this unused RAM? Are there configuration settings in Opera or in the Windows registry itself that will help improve performance? I know where to tweak, but the descriptions of the opera:config variables that I've found are less than helpful.
    2. Is it laughable to ask about memory management and jQuery in the same sentence? If not, does anyone have any suggestions?
    3. Finally, are my plans too ambitious, given the platform I have to work with? I know that Gears and Windows Mobile 6 are on their way out, but they (theoretically) suffice for what I need to do. I could ditch them in favor of an iPhone/iPod Touch, Mobile Safari, and HTML5, but I'd like to try to make this work first.

    I didn't think that Opera was a dog when it comes to JS performance, but perhaps it's worse than I thought. That this motley collection of technologies works at all is a minor miracle, but it needs to be faster and more stable. I appreciate any suggestions.

    Read the article

  • How to make Flash 'play well with others'?

    - by Sensei James
    What up fam. So this isn't a question asking about memory management schemes; for those of you who may not know, the Flash virtual machine relies on garbage collection, using reference counting plus mark-and-sweep (for good coverage of these topics, check out Grant Skinner's article and presentation). And yes, Flash also provides the "delete" operator, which can (unfortunately) only be used to remove the properties of dynamic objects.

    What I want to know is how to make it so that Flash programs don't continue to consume CPU and memory while running in the background (save loading content or communicating remotely, for example). The motivation for this question comes in part from Apple's ban on cross-compiled applications (in its SDK 4) on the grounds that they do not behave as predicted with the multitasking feature central to iPhone OS 4. My intention is not only to make Flash programs that will 'pass muster' as far as multitasking in iPhone OS 4 goes, but also simply to make better-behaved Flash programs.

    Put another way, how might a Flash application mimic the multitasking feature of iPhone OS 4? Does the Flash API provide the means for a developer to put their applications to 'sleep' while other programs run, and then to 'awaken' them just as quickly?

    In our own program, we might do something as crude as detecting when the user has been idle (no mouse motion or key press) for, say, four seconds:

    ```actionscript
    // Pause the program after four seconds of inactivity.
    // (setInterval takes the function first, then the delay.)
    var idle_id:uint = setInterval(pause_program, 4000);
    var current_movie_clip:MovieClip;
    var current_frame:uint;

    // ... on mouse move or key press:
    clearInterval(idle_id);
    idle_id = setInterval(pause_program, 4000);

    // ...
    function pause_program():void {
        // Remember where we were so we can resume later. (setInterval passes
        // no event object, so the clip is taken from the stage root rather
        // than from event.target as in the original post.)
        current_movie_clip = MovieClip(root);
        current_frame = current_movie_clip.currentFrame;
        MovieClip(root).gotoAndStop("program_pause_screen");
    }

    // (on the program pause screen)
    resume_button.addEventListener(MouseEvent.CLICK, resume_program);
    function resume_program(event:MouseEvent):void {
        current_movie_clip.gotoAndPlay(current_frame);
    }
    ```

    If that's the right idea, what's the best way to detect that an application should be shelved? And, more importantly, is it possible for Flash Player to detect that some of its running programs are idle, and to similarly shelve them until the user performs an action to resume them? (Please feel free to answer as much or as little of the many questions I've posed.)

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

    ```vbnet
    ms = New IO.MemoryStream
    bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
    bin.Serialize(ms, largeGraphOfObjects)
    dataToSaveToDatabase = ms.ToArray()
    ' put dataToSaveToDatabase in a SQL Server BLOB
    ```

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small, "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large-object-heap fragmentation when we deserialize the object, and I expect there are also other problems with large-object-heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible.

    We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?

    Read the article

  • DOM memory issue with IE8 (inserting lots of JSON data)

    - by okie.floyd
    I am developing a small web utility that displays some data from some database tables. I have the utility running fine on Firefox, Safari, Chrome..., but the memory management on IE8 is horrendous. The largest JSON request I make returns information to create around 5,000 or so rows in a table within the browser (3 columns in the table).

    I'm using jQuery to get the data (via getJSON). To remove the old/existing table, I'm just doing a $('#my_table_tbody').empty(). To add the new info to the table, within the getJSON callback, I am appending each table row that I create to a variable, and once I have them all, I use $('#my_table_tbody').append(myVar) to add them to the existing tbody. I don't add the table rows as they are created because that seems to be a lot slower than adding them all at once.

    Does anyone have any recommendation on what someone should do who is trying to add thousands of rows of data to the DOM? I would like to stay away from pagination, but I'm wondering if I don't have a choice.

    Update 1: Here is the code I was trying after the innerHTML suggestion (the original post had the angle brackets stripped out; they are restored below):

    ```javascript
    /* Assuming a div called 'main_area' holds the table */
    document.getElementById('main_area').innerHTML = '';

    $.getJSON("my_server", {/* request args here */}, function (j) {
        var mylength = j.length;
        var k = 0;
        var tmpText = '';
        tmpText += ''; /* add the table, thead stuff, and tbody tags here */
        for (k = mylength - 1; k >= 0; k--) {
            tmpText += '<tr class="' + j[k].row_class + '">' +
                       '<td class="col1_class">' + j[k].col1 + '</td>' +
                       '<td class="col2_class">' + j[k].col2 + '</td>' +
                       '<td class="col3_class">' + j[k].col3 + '</td>' +
                       '</tr>';
        }
        document.getElementById('main_area').innerHTML = tmpText;
    });
    ```

    That is the gist of it. I've also tried using just a $.get request, having the server send the formatted HTML, and just setting that in the innerHTML (i.e. document.getElementById('main_area').innerHTML = j;).

    Thanks for all of the replies. I'm floored by the fact that you all are willing to help.

    Read the article

  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing an entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. To start, I have an Entity class which stores a vector of Components:

    ```cpp
    class Entity {
    private:
        std::vector<std::unique_ptr<Component> > components;

    public:
        Entity() { }

        void AddComponent(Component* component) {
            this->components.push_back(std::unique_ptr<Component>(component));
        }

        ~Entity();
    };
    ```

    If I am not mistaken, this means that when the destructor is called (even the default, compiler-created one), the destructor for the Entity will call ~components, which will call ~std::unique_ptr for each element in the vector, leading to the destruction of each Component, which is what I want.

    The Component class has virtual methods, but the important part is its constructor:

    ```cpp
    Component::Component(Entity parent) {
        parent.AddComponent(this); // I am not sure this works like I expect
        // Other things here
    }
    ```

    As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of:

    ```cpp
    std::shared_ptr<Entity> createEntity() {
        std::shared_ptr<Entity> entityPtr(new Entity());
        new Component(*entityPtr); // initialize more, and other types of Components
        return entityPtr;
    }
    ```

    Now, I believe that this setup leaves ownership of the Component in the hands of its parent Entity, which is what I want. First, a small question: do I need to pass the entity into the Component constructor by reference or pointer or something? If I understand C++, it would be passed by value, which means it gets copied, and the copied Entity would die at the end of the constructor.

    The second, and main, question is that code based on this sample will not compile. The complete error is too large to print here, but I think I know somewhat of what is going on: the compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation:

    ```cpp
    inline Component::~Component() { }
    ```

    at the end of the header. However, the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? (A sketch of the usual out-of-line destructor workaround follows below.) For reference, I am using gcc 4.4.6.
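
    A common way around the incomplete-type error, sketched below under the assumption that the header only forward-declares Component, is to keep every member that destroys the unique_ptrs out of line, defined in a translation unit where Component is a complete type. This is an illustrative sketch, not the poster's code:

    ```cpp
    // Entity.h -- Component only needs a forward declaration here.
    #include <memory>
    #include <vector>

    class Component;

    class Entity {
    public:
        Entity();
        ~Entity(); // declared only; defined where Component is complete
        void AddComponent(Component* component);

    private:
        std::vector<std::unique_ptr<Component> > components;
    };

    // Entity.cpp -- Component's full definition is visible here, so the
    // unique_ptr deleter is instantiated against a complete type.
    #include "Entity.h"
    #include "Component.h"

    Entity::Entity() { }
    Entity::~Entity() { }

    void Entity::AddComponent(Component* component) {
        components.push_back(std::unique_ptr<Component>(component));
    }
    ```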

    Read the article

  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Has Lost His Passion?

    - by skyflyer
    It seems this question is not programming-related, but there are lots of similar questions here, so please bear with me! By the way, I am a programmer, and my team is also in charge of a software project. And SO is the only place that has solved a lot of thorny troubles for me! THANK YOU GUYS!

    I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us lots of useful suggestions concerning his work, like design, and I believed he was a smart guy. I believe he still is smart, by the way. A year later, however, he seemed to lose his passion: fooling around every day, no longer caring about his work, and producing poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. For me that is horrible; we have to keep abreast of new technology developments, otherwise we will be thrown out.

    Since we were just coworkers, I did not worry about it too much, beyond mentioning my thoughts several times. But last month we assembled a new group and were assigned a very important project. And I am the team leader, sadly! My boss gave me a lot of support, and a lot of expectations as well. I did a pretty good job before, and I am very optimistic about our future. But as a team, if my team does not work hard, we are doomed to fail no matter how hard I work and push.

    In order to revitalize his passion, I tried a couple of approaches, like talking to him about my concerns and my boss's anger. I offered him a new task which was quite new to him. I even persuaded my boss to give him a new incentive package. But all of them hit a wall. His reaction was that he just did not care. Even worse, he did not want to talk about his situation. I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it did work, I cannot so quickly change myself from friend and coworker into manager. As a novice in management, I am really overwhelmed!

    I do not want to get him fired; we are friends, and I do not want to see him fired as my team member. What can I do? Thank you guys!

    Read the article

  • How to overwrite an array of char pointers with a larger list of char pointers?

    - by Casey
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to the words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, and then I want to replace the passed argument array with the new expanded one. The globbing works fine; that is, g.gl_pathv is populated with the list of expected files. However, I am having trouble copying this array into the struct I was given.

    ```c
    #include <glob.h>
    #include <stdlib.h>
    #include <string.h>

    struct command {
        char **argv;
        // other fields...
    };

    void myFunction(struct command *cmd)
    {
        char **p = cmd->argv;
        char *program = *p++; // save the program name (e.g. 'ls') and advance to the first argument

        glob_t g;
        memset(&g, 0, sizeof(g));
        g.gl_offs = 1; // reserve slot 0 for the program name

        int res = glob(*p++, GLOB_DOOFFS, NULL, &g);
        glob_handle_res(res);
        while (*p) {
            res = glob(*p, GLOB_DOOFFS | GLOB_APPEND, NULL, &g);
            glob_handle_res(res);
        }

        if (g.gl_pathc <= 0) {
            globfree(&g);
            return; // nothing matched
        }

        // the +1 leaves room for the terminating NULL pointer below
        cmd->argv = malloc((g.gl_pathc + g.gl_offs + 1) * sizeof *cmd->argv);
        if (cmd->argv == NULL) {
            sys_fatal_error("pattern_expand: malloc failed\n");
        }

        // copy over the arguments
        size_t i = g.gl_offs;
        for (; i < g.gl_pathc + g.gl_offs; ++i)
            cmd->argv[i] = strdup(g.gl_pathv[i]);

        // insert the original program name and terminate the vector
        cmd->argv[0] = strdup(program);
        cmd->argv[g.gl_pathc + g.gl_offs] = 0;

        globfree(&g);
    }

    void command_free(struct command *cmd)
    {
        char **p = cmd->argv;
        while (*p) {
            free(*p++); // segfaulted here until the terminating NULL was added
        }
        free(cmd->argv);
        free(cmd);
    }
    ```

    Edit 1: Also, I realized I need to stick program back in there as cmd->argv[0].
    Edit 2: Added call to calloc.
    Edit 3: Edited memory management with tips from Alok.
    Edit 4: More tips from Alok.
    Edit 5: Almost working; the app segfaulted when freeing the command struct.

    Finally: It seems I was missing the terminating NULL, so adding the line cmd->argv[g.gl_pathc + g.gl_offs] = 0; (and allocating one extra slot for it) made it work.

    Read the article

  • How to refactor a user-permission system?

    - by John
    Sorry for the lengthy question. I can't tell if this should be a programming question or a project management question. Any advice will help.

    I inherited a reasonably large web project (1 year old) from a solo freelancer who architected it and then abandoned it. The project was a mess, but I cleaned up what I could, and now the system is more maintainable. I need suggestions on how to extend the user-permission system.

    As it is now, the database has a t_user table with the column t_user.membership_type. Currently, there are 4 membership types with the following properties:

    - 3 of the membership types are almost functionally the same, except for the different monthly fees each must pay
    - 1 of the membership types is a "fake-user" type which has limited access (different business logic also applies)

    With regard to the fake-user type, if you look in the system's business logic files, you will see a lot of hard-coded IF statements that do something like:

    ```
    if (fake-user) {
        // do something
    } else {
        // a paid member of type 1, 2 or 3
        // proceed normally
    }
    ```

    My client asked me to add 3 more membership types to the system, each of them with unique features to be implemented this month and substantive "to-be-determined" features next month. My first reaction is that I need to refactor the user-permission system. But it concerns me that I don't have enough information on the "to-be-determined" membership type features for next month. Refactoring the user-permission system will take a substantive amount of time, and I don't want to refactor something and throw it out the following month. I get substantive feature requests on a monthly basis that come out of the blue. There is no project roadmap. I've asked my client to provide me with a roadmap of what they intend to do with the new membership types, but their answer is along the lines of: "We just want to do [feature here] this month. We'll think of something new next month."

    So the questions that come to mind are:

    1) Is it dangerous for me to refactor the user-permission system without knowing what membership type features exist beyond a month from now?
    2) Should I refactor the user-permission system regardless? Or just continue adding IF statements as needed in all my controller files? Or can you recommend a different approach to user-permission systems, maybe role-based (see the sketch below)?
    3) Should this project have a roadmap? For a 1-year-old project like mine, how far into the future should this roadmap project?
    4) Any general advice on the best way to add 3 new membership types?
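
    For the role-based idea in question 2, here is a minimal sketch of the data model; all names are hypothetical, not taken from the project. The point is that each membership type maps to a role holding a set of named permissions, so business logic asks "can this user do X?" instead of testing the membership type directly:

    ```cpp
    #include <set>
    #include <string>

    // A role bundles named permissions for one membership type.
    struct Role {
        std::string name;
        std::set<std::string> permissions; // e.g. "create_ticket", "view_wiki"
    };

    // Business logic calls user.can("...") rather than branching
    // on the membership type in every controller file.
    struct User {
        Role role;
        bool can(const std::string& permission) const {
            return role.permissions.count(permission) > 0;
        }
    };
    ```

    Adding a new membership type then becomes defining a new Role rather than touching every IF statement.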

    Read the article

  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale), and it uses the traversal visitor pattern to allow you to add new functionality to its classes. I've finally figured out how it works, and I've got a visitor that just returns the properties from one of the objects (since I don't know how they're allocated). So my little code does this:

    ```cpp
    VisitorResult AGLContextVisitor::visit( Channel* channel )
    {
        // Search through Nodes, Pipes until we get to the right window.
        // Add some code to make sure we find the right one?

        // Not executing the following code as C++ in gdb?
        eq::Window* w = channel->getWindow();
        OSWindow* osw = w->getOSWindow();
        AGLWindow* aw = (AGLWindow *)osw;
        AGLContext agl_ctx = aw->getAGLContext();
        this->setContext(agl_ctx);
        return TRAVERSE_PRUNE;
    }
    ```

    So here's the problem:

    ```
    (gdb) print w
    0x0
    ```

    BUT, if I do this:

    ```
    (gdb) set objc-non-blocking-mode off
    (gdb) print w=channel->getWindow()
    0x300effb9
    ```

    That is an honest memory location, and it sets w, as verified in the Debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in gdb but not in the code? The file is entirely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing some memory management basics here, either with C++ or Objective-C.

    Edit: channel->getWindow() is supposed to do this:

    ```cpp
    /** @return the parent window. @version 1.0 */
    Window* getWindow() { return _window; }
    ```

    The code also executes fine if I run it from a C++-only application.

    Edit: No... I tried creating a simple stand-alone program, since I was tired of running it as a plugin (messy to debug). And no, it doesn't run in the C++ program either. So I'm really at a loss as to what I'm doing wrong.

    Thanks, -- Stephen Furlani

    Read the article

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a CSV file. This calls a service that reads the information from the CSV, puts it into DynamicEntity objects, and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity, it kicks off other plugins that apply certain business rules. These rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    -- EDIT: Okay, well, the handle count goes up when I call the Service.Create(DynamicEntity) method, so I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or whether it is something CRM takes care of (or doesn't take care of, but that I can't do anything about).

    -- Another Edit: Right, this is how it works:

    1) We have CRM and its related services.
    2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on CSV info passed into it.
    3) We have a website that allows a user to upload a CSV and calls service number 2 above to create/update entities in CRM.
    4) We have plugins fired by CRM which use service 1 above to create/update entities.

    So the user uploads a CSV to the website (3), which fires a service (2). When service 2 creates an entity using service 1, service 4 fires. Service 4 also uses service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the methods/classes/services finish, the handle count remains the same, so when the whole process occurs again the handle count increases again.

    Read the article

  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm asking a new question, as I don't think the software I'm using has anything to do with the overall problem, and I'm trying to understand a little more about how to properly manage things. This is ASP.NET.

    I've been trying to understand the need for Dispose/Finalize over the past few days, and I believe I've got to a stage where I'm pretty happy with when I should and shouldn't implement Dispose/Finalize. "If I have members that implement IDisposable, put explicit calls to their Dispose in my Dispose method" seems to be my understanding. So now I'm thinking maybe my understanding of object lifetimes, and of what holds on to what, is just wrong!

    Rather than come up with some sample code that I think will illustrate my point, I'm going to describe actual code as best I can and see if someone can talk me through it.

    I have a repository class, and in it a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository, which in turn explicitly calls DataContext.Dispose(). Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end: Front End - Controller - Repository - Controller - Front End. (Using Red Gate Memory Profiler, I take a snapshot of my software when the page is first loaded.)

    My front end creates a controller object on page load and then makes a request to the repository, which sends back a list of items. When the page has finished loading, I call Dispose on the controller, which in turn calls Dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class. If I then refresh the page, it jumps to two "live" instances of the controller class. If I look at the object retention graph, the objects I created in my call to the list are being held onto, ultimately, by what looks like LINQ.

    The controller/repository aside, if I create a list of objects somewhere, or create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The "live" instances suggest to me that these are still in-memory, active objects; does the fact that RMP apparently forces GC mean anything?

    Read the article

  • Is there a way to route all traffic from Android through a proxy/tunnel to my Tomato router?

    - by endolith
    I'd like to be able to connect my Android phone to public Wi-Fi points with unencrypted connections, but:

    1. People can see what I'm doing by intercepting my radio transmissions.
    2. People who own the access point can see what I'm doing.

    There are tools like WeFi and probably others to automatically connect to access points, but I don't trust random APs. I'd like all my traffic to go through an encrypted tunnel to my home router, and from there out to the Internet. I've done such tunnels from other computers with SSH/SOCKS and PPTP before. Is there any way to do this with Android? I've asked the same question on Force Close, so I'll change this question to be about both sides of the tunnel. More specifically:

    - My phone now has CyanogenMod 4.2.3.
    - My router currently has Tomato version 1.25.
    - I'm willing to change the router firmware, but I was having issues with DD-WRT disconnecting, which is why I'm using Tomato.

    Some possible solutions:

    SSH with dynamic SOCKS proxy:
    - Android supposedly supports this through ConnectBot, but I don't know how to get it to route all traffic.
    - Tomato supports this natively. I've been using this with MyEntunnel for my web browsing at work. It requires setting up each app to go through the proxy, though.

    PPTP:
    - Android supports this natively.
    - Tomato does not support this, unless you get the jyavenard mod and compile it?
    - I previously used PPTP for web browsing at work and in China because it's native in Windows and DD-WRT. After a while I started having problems with it, then I started having problems with DD-WRT, so I switched to the SSH tunnel instead. Also, it supposedly has security flaws, but I don't understand how big of a problem that is.

    IPsec L2TP:
    - Android (phone) and Windows (work/China) both support this natively.
    - I don't know of a router that does. I could run it on my computer using Openswan, but then there are two points of failure.

    OpenVPN:
    - CyanogenMod apparently includes this, and now has an entry to create a new OpenVPN in the normal VPN interface, but I have no idea how to configure it. TunnelDroid apparently handles some of this. Future versions will have native support in the VPN settings?
    - Tomato does not support this, but there are mods that do (TomatoVPN, the roadkill mod, the SgtPepperKSU mod, the Thor mod)? I don't know how to configure this, either.

    I could also run a VPN server on my desktop, I guess, though that's less reliable and presumably slower than running it on the router itself. I could change the router firmware, but I'm wary of more fundamental things breaking; Tomato has been problem-free for the regular stuff.

    Related: Anyone set up a SSH tunnel to their (rooted) G1 for browsing?

    Read the article

  • What reverse proxy server will direct traffic to healthy servers whose health is based on a result string?

    - by joshua paul
    What reverse proxy server will direct traffic to healthy servers whose health is based on a result string? Ideally I'd like something like DNS Made Easy or UltraDNS - lol - but for reverse proxying. I have looked at Pound, DeleGate, HAProxy, Squid, Varnish, nginx, Apache, and Cherokee, but I can't see that they will work - they only test for HTTP result codes (though see the sketch below).

    The scenario:

    - A client requests www.aaa.com
    - www.aaa.com is a reverse proxy
    - The reverse proxy looks at "test.php" on servers 1.aaa.com, 2.aaa.com and 3.aaa.com for the result string "OK"
    - If a server is "OK", then the proxy directs requests to it

    Help!
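
    For what it's worth, a sketch of how this could look in HAProxy, assuming a version with body-matching health checks (http-check expect arrived in the 1.5 series); the port and backend name below are illustrative, the hostnames and path come from the question:

    ```
    # Hypothetical haproxy.cfg fragment: mark a server healthy only if
    # GET /test.php returns a body containing the string "OK".
    backend app_servers
        option httpchk GET /test.php
        http-check expect string OK
        server s1 1.aaa.com:80 check
        server s2 2.aaa.com:80 check
        server s3 3.aaa.com:80 check
    ```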

    Read the article

  • How do I send traffic to specific IP addresses through VPN and others directly to the internet?

    - by keithwarren7
    I am running Windows 7 and using the Cisco VPN adapter to connect to a private network where I access resources at addresses starting with 172. My problem is that when I am connected to the VPN, all external traffic is routed through it. I want to set things up so that only certain IP addresses go through the VPN and everything else goes out over the local adapter directly to the Internet as normal. How? (One manual approach is sketched below.)
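
    One common manual approach is split tunneling by hand: stop the VPN client from installing a default route (if it allows that), then add static routes that send only the private ranges through the VPN gateway. A sketch from an elevated command prompt, where the gateway address is a placeholder and must be read from your own "route print" output:

    ```
    :: Send only the 172.16.0.0/12 private range through the VPN gateway
    :: (10.10.0.1 is a placeholder for the VPN's gateway address).
    route add 172.16.0.0 mask 255.240.0.0 10.10.0.1 metric 1
    ```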

    Read the article

  • Two wireless NICs on a MacBook - can I bind all VPN traffic to one, and everything else to another?

    - by Ben Aston
    I frequently connect to my workplace over a VPN. I would like to continue watching videos from, say, YouTube while I work on the VPN, without degrading the available VPN bandwidth (say, for an RDP session). Can I configure a second NIC to deal with only the VPN traffic, with everything else going over the primary? Specs as requested: MacBook Pro, OS X Snow Leopard, using the built-in OS X VPN connectivity, the built-in AirPort card, and a USB external Wi-Fi adapter.

    Read the article

  • Apache: Limit the Number of Requests/Traffic per IP?

    - by Ian Kern
    I would like to allow a single IP to use up to, say, 1 GB of traffic per day, and if that limit is exceeded, have all requests from that IP dropped until the next day. However, a simpler solution where the connection is dropped after a certain number of requests would suffice. Is there already some sort of module that can do this? Or perhaps I can achieve it through something like iptables (a sketch follows below)? Thanks
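
    For the simpler variant, here is a hedged iptables sketch; the threshold is illustrative, and note this limits concurrent connections per source IP via the connlimit match rather than enforcing a daily byte quota, which would need separate accounting:

    ```
    # Reject new connections to port 80 from any single IP that already
    # has more than 20 connections open (requires the connlimit match).
    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
    ```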

    Read the article
