Search Results



  • An open plea to Microsoft to fix the serializers in WCF.

    - by Scott Wojan
    I simply DO NOT understand how Microsoft can be this far along with a tool like WCF and STILL tout it as an "Enterprise" tool. For example, the following is a simple xsd schema with a VERY simple data contract that any enterprise would expect an "enterprise system" to be able to handle:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="Sample"
            targetNamespace="http://tempuri.org/Sample.xsd"
            elementFormDefault="qualified"
            xmlns="http://tempuri.org/Sample.xsd"
            xmlns:mstns="http://tempuri.org/Sample.xsd"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="SomeDataElement">
            <xs:annotation>
              <xs:documentation>This documents the data element. This sure would be nice for consumers to see!</xs:documentation>
            </xs:annotation>
            <xs:complexType>
              <xs:all>
                <xs:element name="Description" minOccurs="0">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:minLength value="0"/>
                      <xs:maxLength value="255"/>
                    </xs:restriction>
                  </xs:simpleType>
                </xs:element>
              </xs:all>
              <xs:attribute name="IPAddress" use="required">
                <xs:annotation>
                  <xs:documentation>Another explanation! WOW!</xs:documentation>
                </xs:annotation>
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:pattern value="(([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:attribute>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    A minimal example XML document would be:

        <?xml version="1.0" encoding="utf-8" ?>
        <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
        </SomeDataElement>

    With the maximal example being:

        <?xml version="1.0" encoding="utf-8" ?>
        <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
          <Description>ddd</Description>
        </SomeDataElement>

    This schema simply CANNOT be exposed by WCF. Let's list why:

    - svcutil.exe will not generate classes for you, because it can't read an xsd with xs:annotation.
    - Even if you remove the documentation, the DataContractSerializer DOES NOT support attributes, so IPAddress would become an element, thus not meeting the contract.
    - xsd.exe could generate classes, but it is a very legacy tool, generates legacy code, and you still suffer from the following issues.
    - NONE of the serializers support emitting the xs:annotation documentation. You'd think a consumer would really like to have as much documentation as possible!
    - NONE of the serializers support enforcement of xs:restriction, so you can forget about xs:minLength, xs:maxLength, or xs:pattern enforcement.

    Microsoft... please, please, please, please look at putting the work into your serializers so that they support the very basics of designing enterprise data contracts!!
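    A common workaround for the attribute limitation (my note, not part of the original post) is to opt into the XmlSerializer, which does understand XML attributes. A minimal sketch against the schema above; the class shape is an assumption derived from the xsd:

        using System;
        using System.Xml.Serialization;

        [XmlRoot("SomeDataElement", Namespace = "http://tempuri.org/Sample.xsd")]
        public class SomeDataElement
        {
            // Unlike the DataContractSerializer, the XmlSerializer can map this
            // property to an XML attribute, as the contract requires.
            [XmlAttribute("IPAddress")]
            public string IPAddress { get; set; }

            [XmlElement("Description")]
            public string Description { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var data = new SomeDataElement { IPAddress = "1.1.168.10", Description = "ddd" };
                new XmlSerializer(typeof(SomeDataElement)).Serialize(Console.Out, data);
            }
        }

    On a WCF contract, marking the service or operation with [XmlSerializerFormat] swaps this serializer in for the DataContractSerializer, though the post's complaints still stand: neither serializer emits xs:annotation documentation or enforces xs:restriction facets.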


  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter.

    Within your BizTalk 2009 project, right-click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next.

    You must now set the connection string. To start with choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add, and then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user.

    * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port.

    You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration.

    This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC transport window select Next.

    You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure.

    * I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.)

    Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL:

        Select * from TestTable;

    The table itself has three columns: Name, Sex and Married.

    Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:

    - An empty schema for the request; after all, I have no parameters for the stored procedure.
    - A response schema comprised of a Table Record with Name, Sex and Married children.

    Next I will be looking at the use of the ODBC adapter with:

    - Receive ports
    - Send ports


  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking university classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects.

    Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as:

    - Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer hired thirty days before me, one hired two weeks after me. Most of the department has been hiring a large number of people since I started.
    - Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM.
    - Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room with customer service, customer training, and the QA department. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far.
    - Several emails have been sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off".
    - People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work.
    - A version control system from 2005 that we must access with IE, keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem".
    - No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers doing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white space back to the database instead of dealing with the issue directly.

    Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work.
    I should be judged on my work output and quality, not what I have up on my screen for the five seconds you're walking by.

    So, my question is, even though I'm just barely at my 90 days: How do you decide to move on from a job and look elsewhere, or when you should start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from, even though I sent out several resumes a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?


  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership

    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose

    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed

    Why - If a project cannot be properly staffed, the probability of its success is going to be low. Outlining the resources needed - in both skill set and duration - will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria

    Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.

    Request Phase Summary

    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise!

    Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.


  • C# 5 Async, Part 3: Preparing Existing Code for Await

    - by Reed
    While the Visual Studio Async CTP provides a fantastic model for asynchronous programming, it requires code to be implemented in terms of Task and Task<T>. The CTP adds support for Task-based asynchrony to the .NET Framework methods, and promises to have these implemented directly in the framework in the future. However, existing code outside the framework will need to be converted to using the Task class prior to being usable via the CTP.

    Wrapping existing asynchronous code into a Task or Task<T> is, thankfully, fairly straightforward. There are two main approaches to this.

    Code written using the Asynchronous Programming Model (APM) is very easy to convert to using Task<T>. The TaskFactory class provides the tools to directly convert APM code into a method returning a Task<T>. This is done via the FromAsync method. This method takes the BeginOperation and EndOperation methods, as well as any parameters and state objects, as arguments, and returns a Task<T> directly. For example, we could easily convert the WebRequest BeginGetResponse and EndGetResponse methods into a method which returns a Task<WebResponse> via:

        Task<WebResponse> task = Task.Factory
            .FromAsync<WebResponse>(
                request.BeginGetResponse,
                request.EndGetResponse,
                null);

    Event-based Asynchronous Pattern (EAP) code can also be wrapped into a Task<T>, though this requires a bit more effort than the one line of code above. This is handled via the TaskCompletionSource<T> class. MSDN provides a detailed example of using this to wrap an EAP operation into a method returning Task<T>. It demonstrates handling cancellation and exception handling as well as the basic operation of the asynchronous method itself. The basic form of this operation is typically:

        Task<YourResult> GetResultAsync()
        {
            var tcs = new TaskCompletionSource<YourResult>();

            // Handle the event, and set up the task results...
            this.GetResultCompleted += (o, e) =>
            {
                if (e.Error != null)
                    tcs.TrySetException(e.Error);
                else if (e.Cancelled)
                    tcs.TrySetCanceled();
                else
                    tcs.TrySetResult(e.Result);
            };

            // Call the asynchronous method
            this.GetResult();

            // Return the task from the TaskCompletionSource
            return tcs.Task;
        }

    We can easily use these methods to wrap our own code into a method that returns a Task<T>. Existing libraries which cannot be edited can be extended via extension methods. The CTP uses this technique to add appropriate methods throughout the framework.

    The suggested naming for these methods is to define them as "Task<YourResult> YourClass.YourOperationAsync(...)". However, this naming often conflicts with the default naming of the EAP. If this is the case, the CTP has standardized on using "Task<YourResult> YourClass.YourOperationTaskAsync(...)".

    Once we've wrapped all of our existing code into operations that return Task<T>, we can begin investigating how the Async CTP can be used with our own code.
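    To connect this back to the CTP itself, once an operation returns Task<T> it can be consumed directly with await. A minimal sketch (my illustration, reusing the placeholder names from the example above):

        public async Task ShowResultAsync()
        {
            // 'await' unwraps the Task<YourResult> produced by the
            // TaskCompletionSource-based wrapper above.
            YourResult result = await GetResultAsync();
            Console.WriteLine(result);
        }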


  • Make the Time

    - by WonderOfItAll
    Took the little one to the pool tonight for swim lessons. Okay, okay. They're not really lessons so much as they are "Hey, here's a few bucks, let me rent out a small section of your pool to swim around with my little one."

    Saw a dad at the pool. Bluetooth on, iPad in hand, and two-year-old somewhere around there. Saw a mom at the pool. Arguing with her five-year-old to NOT take a shower after swimming. Bluetooth on, iPad in hand, work laptop open on the stadium seats. Her reasoning for not wanting the child to shower: "Look, I have to get this stuff to the office by 6:30, we don't have time for you to shower. Let's go." Wait, isn't the whole point of this little experience called Mommy and Me (or, as in my case, Daddy and Me), wherein Mommy/Daddy is supposed to spend time with the little one? Not with the Bluetooth. Not with the work laptop.

    Dad (yeah, the same dad from earlier), in the pool. Bluetooth off (it's not waterproof or I'm sure he would've had it on), two-year-old in hand and iPad somewhere put away. Getting frustrated with the kid because he won't 'perform' on command. Here's a little exchange:

    Kid: "I don't wanna get in the water"
    Dad: "Well, we're here for 30 minutes, get in the water"
    Kid: "No, don't wanna"
    Dad: "Fine, I'm getting in" and, true to his word, in he goes, off to swim.
    Kid: Crying
    Dad: "Well, c'mon"
    Kid: Walking to stands
    Dad: Ignoring kid
    Kid: At stands
    Dad: Out of pool, drying off. Frustrated. Grabs bag, grabs kid, leaves.

    How sad. It really seems like I am living in a generation of parents who view their children as one big scheduled distraction after another. It's almost like the dad was saying, "Look, little 2-year-old boy, I have a busy schedule. Right now my Outlook calendar tells me that I have 30 mins to spend with you, so let's go, kid: PERFORM, because I have the time."

    Really? Can someone please tell me when the hell this happened? When did spending time with your kid, spending time with your family, spending time with your spouse, etc... become a distraction? I've seen people at work all day tweeting throughout the day, checked in with Foursquare, IM up and running constantly so they can 'stay in touch', only to see these same folks come home and be irritated because their kids or their spouse wants to connect with them. I've seen these very same people leave the house, go to the corner bar/store/you-name-the-place to be 'alone', only to find them there, plugged in, tweeting away, etc, etc, etc.

    I LOVE technology. I love working with technology. But I also know that I am a human being. A person who, by very definition, is a social being. I need social interactions and contact - and, no, I'm not talking about the Social Graph kind of connections, I'm talking about those interactions which, *GASP*, involve eye-to-eye contact and human contact.

    A recent study found that the number one complaint of kids is that they feel they have to compete with technology for their parents' time and attention. The number one wish from high school kids? That their parents would turn off the computer/TV/cell phone at dinner. This, coming from high school kids. Shouldn't that tell you a whole helluva lot?

    So, do yourself a favor tomorrow. Plug into technology all day. Throw yourself into it. Be passionate about what you do. When you walk through the door to your family, turn it all off for 30 mins and be there with your loved ones. If you can manage to play Angry Birds, I'm sure you can handle being disconnected for 30 minutes. Make the time.


  • Missing Fields and Default Values

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Dealing with Missing Fields and Default Values

    New fields and new default values are not propagated throughout the list. They only apply to new and updated items, not to items already entered. They are only prospective. We need to be able to deal with this issue.

    Here is a scenario. The user has an old list with old items and adds a new field. The field is not created for any of the old items, and trying to get its value raises an ArgumentException. Here is another: a default value is added to a field. All the old items, where the field was not assigned a value, do not get the new default value. The two can also happen in tandem - a new field is added with a default. The older items have neither. Even better, if the user changes the default value, the old items still carry the old defaults.

    Let's go a bit further. You have already written code for the list, be it an event receiver, a feature receiver, a console app or a command extension, in which you span all the fields and run on selected items - some new (no problem) and some old (problems aplenty). Had you written defensive code, you would be able to handle the situation, including similar changes in the future. So, without further ado, here's how.

    Instead of just getting the value of a field in an item - item[field].ToString() - use the function below. I use ItemValue(item, fieldname, "mud in your eye"), and if "mud in your eye" is what I get, I know that the item did not have the field.

        /// <summary>
        /// Return the column value or a default value
        /// </summary>
        private static string ItemValue(SPItem item, string column, string defaultValue)
        {
            try
            {
                return item[column].ToString();
            }
            catch (NullReferenceException)
            {
                return defaultValue;
            }
            catch (ArgumentException)
            {
                return defaultValue;
            }
        }

    I also use a similar function to return the default, with a funny default-default to ascertain that the default does not exist. Here it is:

        /// <summary>
        /// Return a field's default or the "default" default.
        /// </summary>
        public static string GetFieldDefault(SPField fld, string defValue)
        {
            try
            {
                // -- Check if default exists.
                return fld.DefaultValue.ToString();
            }
            catch (NullReferenceException)
            {
                return defValue;
            }
            catch (ArgumentException)
            {
                return defValue;
            }
        }

    How is this defensive? You have trapped an expected error and dealt with it, so the program did not stop cold in its tracks, and the required code ran to its end. Now, take a further step - write to a log (see Logging - a log blog). Read your own log every now and then, and act accordingly.

    That's all Folks!
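    As a quick illustration of the defensive pattern (my sketch; the list variable and the "Status" field name are hypothetical):

        SPListItem item = list.Items[0];
        string status = ItemValue(item, "Status", "mud in your eye");
        if (status == "mud in your eye")
        {
            // The old item never had the field; fall back to the field's
            // default, with a "default" default in case none was defined.
            status = GetFieldDefault(list.Fields["Status"], "no default");
        }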


  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of 2011. I expect that Windows Azure will be very hot in 2011 and that a lot of cloud computing adoption will happen with Windows Azure. Web application scalability will be a big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack.

    Windows Azure

    Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: pay only for what you use, scale up when you need capacity and pull it back when you don't. Microsoft handles all the patches and maintenance - all in a secure environment with over 99.9% uptime.

    Silverlight 5

    Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for web, desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with lots of new features and capabilities. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011.

    Windows Phone 7 Development Tools

    Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post, and MSDN documentation is available from here.

    EF Code First

    I am a big fan of Entity Framework's Code First approach and hope that it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three types of approaches, and these will attract different types of developer audiences.

    ASP.NET MVC 3

    ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack in the next year. ASP.NET developers will widely move to the ASP.NET MVC Framework from WebForms development. The new Razor view engine is great, and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML, and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. In the next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First.

    RavenDB

    NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. I have written a few articles on RavenDB, and you can read them from here.

    Managed Extensibility Framework (MEF)

    Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution for the runtime extensibility problem. I hope that .NET developers will adopt MEF more in the next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post MEF or Managed Extensibility Framework - Creating a Zoo and Animals.
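    To give a flavor of MEF's attributed programming model, here is a minimal sketch (my example, loosely echoing the zoo theme of the post linked above):

        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface IAnimal { string Speak(); }

        [Export(typeof(IAnimal))]
        public class Dog : IAnimal
        {
            public string Speak() { return "Woof"; }
        }

        public class Zoo
        {
            [Import]
            public IAnimal Animal { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // Discover exports in the current assembly and satisfy imports.
                var catalog = new AssemblyCatalog(typeof(Program).Assembly);
                var container = new CompositionContainer(catalog);
                var zoo = new Zoo();
                container.ComposeParts(zoo);
                Console.WriteLine(zoo.Animal.Speak());
            }
        }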


  • how to keep display tick rate steady when using continuous collision detection?

    - by nas Ns
    (I've just found out about this forum.) I hope it is OK to repost my question here. I posted this question at Stack Overflow, but it looks like I might get better help here. Here is the question:

    I've implemented a basic particle motion simulation with continuous collision detection, but there is a small issue in the display. Assume the simple case of circles moving inside a square. All collisions are elastic. No friction. All motion is constant speed. No forces are involved, no gravity. So when a particle is moving, it is always moving at constant speed (in between collisions).

    What I do now is this: Let the simulation time step be 1 second (for example). This is the time step the simulation is advanced by before displaying the new state (unless there is a collision sooner than this). At the start of each time step, the time of the next collision between any particles, or between a particle and a wall, is determined. Call this the TOC time; let's say TOC was .5 seconds in this case. Since TOC is smaller than the standard time step, the system is moved forward by TOC and the new state is displayed, so that the new display shows any collisions as just taking place (say 2 circles just touched each other, or a circle just touched a wall). Next, the collision(s) are resolved (i.e. speeds updated, directions changed, etc.). A new step is started, and the same thing happens.

    Now assume there is no collision detected within the next 1 second (those 2 circles above will not be in collision any more, even though they are still touching, since their speeds show they are now moving apart). Hence, the simulation time is advanced by the full one second, the standard time step, the particles are moved on the screen using 1 second of simulation time, and the new display is shown.

    You see what has just happened: one frame ran for .5 seconds, but the next frame runs for 1 second; maybe the 3rd frame is displayed after 2 seconds; maybe the 4th frame is displayed after 2.8 seconds (because TOC was .8 seconds then); and so on. What happens is that the motion of a particle on the screen appears to speed up or slow down, even though it is moving at constant speed and was not even involved in a collision. i.e. Looking at one particle on its own, I see it suddenly speeding up or slowing down, because another particle hit a wall. This is because the display tick is not uniform - the frame rate keeps changing - giving the false illusion that a particle is moving at non-constant speed while in fact it is moving at constant speed. The motion on the screen is not smooth, since the screen is not updating at a constant rate.

    I am not able to figure out how to fix this. If I want to show 2 particles at the moment of the collision, I must draw the screen at different times. Drawing the screen always at the same tick interval results in seeing 2 particles before the collision, and then after the collision, and not just when they are colliding, which looked bad when I tried it.

    So, how do real games handle this issue? How do you display things in order to show collisions when they happen, yet keep the display tick constant? These 2 requirements seem to contradict each other.
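    For what it's worth, the usual approach in games (my sketch, not from the original question) is to decouple the two rates: keep the display tick fixed, and consume each tick's time budget in collision-bounded sub-steps. Collisions are still resolved at their exact time of contact; the screen just no longer redraws at those irregular moments. The helper names here are assumptions:

        const double FrameDt = 1.0 / 60.0;  // fixed display tick
        while (running)
        {
            double remaining = FrameDt;
            while (remaining > 0.0)
            {
                double toc = TimeOfNextCollision();   // assumed helper
                double step = Math.Min(toc, remaining);
                AdvanceParticles(step);               // assumed helper
                if (step >= toc)
                    ResolveCollisions();              // assumed helper
                remaining -= step;
            }
            Render();  // once per fixed tick, so on-screen speed stays constant
        }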


  • How do you deal with poor management [closed]

    - by Sybiam
    I come from a company where, during one project, we saw the client 3 times during the whole project. We were never informed when the client came into the office to discuss his requirements. I set up Redmine and told them that if they have any request they can post an issue there, but they never really used Redmine to publish anything. They would instead:

    - harass a team member on the phone at any time of the day or night
    - hand us sheets of paper with new requests or changes
    - hand us new (graphical) designs

    They asked how much time it would take us to finish the project; I gave them a date, plus a week to test everything and deploy. I calculated that time taking into account the features we had to do at that point. And then they blamed us that our deadline was wrong and that we lied. But the truth is that one week before that deadline they added a couple of monster features from nowhere, and in the week when we were supposed to test and deploy, my friends spent all day in the office changing all the little things.

    After that project, my friend got some kind of depression and got scared every time his phone rang. They had used him as a communication proxy. After that project from hell (everybody got pissed off on that project), as far as I know the designer who was working with us left after it, and she had some kind of issue with the managers too. My team also started looking for work somewhere else.

    At first I tried to get things straight with management. I tried to arrange a meeting to discuss the communication issues and so on. What really pissed me off and made me leave that job for good is the following exchange:

    Me: "We have to discuss what went wrong on the last project. It's quite important."
    Him: "Let's talk about it in a week or two. Just make a list of all the things you did wrong."
    Me: "We already have a new project and we want to prevent what happened on the last project from happening again."
    Him: "Just do it, and we'll have our meeting in a week. Make a list of all the things you did wrong."

    It kind of ended there; then he organized a meeting at a moment when I was unable to come. My friend talked with him and tried to explain that we really had to discuss organizational issues and how to manage a project. His answer was pretty much: "During the meeting I don't want to hear how you want us to manage a project; I want to know what you guys did wrong."

    After that, I felt it wasn't even worth discussing anything, since they weren't even ready to listen to us. I found a new job and I'm pretty happy with my choice.

    I'd like to know how you'd handle such a situation. Is there anything to do to solve a communication problem like this? After that project my friend got a depression, and some other employees had their downs too, as far as I know. I wonder what else we can do other than leave these places as soon as possible. I feel sad for the people who are still there and get screamed at, just because they need money in order to eat, and finding another job like that isn't easy.

    Note: I died a little when our boss asked us to make a list of things we (programmers) did wrong. This is probably the stupidest request I ever got. If everybody thinks they did everything right, it doesn't mean that there are no problems. Individual problems are rarely the big issue. Colleagues help each other and solve these issues to prevent problems.


  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I've been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my "Busy Developer's Guide to the Kinect SDK Beta". Let's get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed, you will have the option to select the following templates:

    - KinectDepth
    - KinectSkeleton
    - KinectVideo

    Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the "Video" template:

        <Window x:Class="KinectVideoApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="480" Width="640">
            <Grid>
                <Image Name="videoImage"/>
            </Grid>
        </Window>

    and MainWindow.xaml.cs:

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using Microsoft.Research.Kinect.Nui;

        namespace KinectVideoApplication1
        {
            public partial class MainWindow : Window
            {
                // Instantiate the Kinect runtime. Required to initialize the device.
                // IMPORTANT NOTE: You can pass the device ID here, in case more than
                // one Kinect device is connected.
                Runtime runtime = new Runtime();

                public MainWindow()
                {
                    InitializeComponent();

                    // Runtime initialization is handled when the window is opened. When
                    // the window is closed, the runtime MUST be uninitialized.
                    this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                    this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);

                    // Handle the content obtained from the video camera, once received.
                    runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
                }

                void MainWindow_Unloaded(object sender, RoutedEventArgs e)
                {
                    runtime.Uninitialize();
                }

                void MainWindow_Loaded(object sender, RoutedEventArgs e)
                {
                    // Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                    runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);

                    // You can adjust the resolution here.
                    runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
                }

                void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
                {
                    PlanarImage image = e.ImageFrame.Image;
                    BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                        PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                    videoImage.Source = source;
                }
            }
        }

    You will find this template pack very handy, especially if you are new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several methods that can help you save an image (for an example, see the sketch at the end of this post). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor - This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
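    As a sketch of what those helpers buy you (my example; ToBitmapSource and Save are extension methods from the toolkit, and exact signatures may vary between releases), the video handler from the template above could shrink to:

        void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
        {
            // Convert the Kinect frame and save it via the toolkit's extension
            // methods instead of hand-rolling BitmapSource.Create.
            BitmapSource source = e.ImageFrame.ToBitmapSource();
            source.Save("snapshot.jpg", ImageFormat.Jpeg);
        }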
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed


  • C# – Using a delegate to raise an event from one class to another

    - by Bill Osuch
    Even though this may be a relatively common task for many people, I've had to show it to enough new developers that I figured I'd immortalize it...

    MSDN says "Events enable a class or object to notify other classes or objects when something of interest occurs. The class that sends (or raises) the event is called the publisher and the classes that receive (or handle) the event are called subscribers."

    Any time you add a button to a Windows Forms or web app, you can subscribe to the OnClick event, and you can also create your own event handlers to pass events between classes. Here I'll show you how to raise an event from a separate class to a console application (or Windows Form).

    First, create a console app project (you could create a Windows Form, but this is easier for this demo). Add a class file called MyEvent.cs (it doesn't really need to be a separate file, this is just for clarity) with the following code:

        public delegate void MyHandler1(object sender, MyEvent e);

        public class MyEvent : EventArgs
        {
            public string message;
        }

    Your event can have whatever public properties you like; here we've just got a single string. Next, add a class file called WorkerDLL.cs; this will simulate the class that would be doing all the work in the project. Add the following code:

        class WorkerDLL
        {
            public event MyHandler1 Event1;

            public WorkerDLL()
            {
            }

            public void DoWork()
            {
                FireEvent("From Worker: Step 1");
                FireEvent("From Worker: Step 5");
                FireEvent("From Worker: Step 10");
            }

            private void FireEvent(string message)
            {
                MyEvent e1 = new MyEvent();
                e1.message = message;
                if (Event1 != null)
                {
                    Event1(this, e1);
                }
                e1 = null;
            }
        }

    Notice that the FireEvent method creates an instance of the MyEvent class and passes it to the Event1 handler (which we'll create in just a second). Finally, add the following code to Program.cs:

        static void Main(string[] args)
        {
            Program p = new Program(args);
        }

        public Program(string[] args)
        {
            Console.WriteLine("From Console: Creating DLL");
            WorkerDLL wd = new WorkerDLL();
            Console.WriteLine("From Console: Wiring up event handler");
            WireEventHandlers(wd);
            Console.WriteLine("From Console: Doing the work");
            wd.DoWork();
            Console.WriteLine("From Console: Done - press any key to finish.");
            Console.ReadLine();
        }

        private void WireEventHandlers(WorkerDLL wd)
        {
            MyHandler1 handler = new MyHandler1(OnHandler1);
            wd.Event1 += handler;
        }

        public void OnHandler1(object sender, MyEvent e)
        {
            Console.WriteLine(e.message);
        }

    The OnHandler1 method is called any time the event handler "hears" an event matching the specified signature - you could have it log to a file, write to a database, etc. Run the app in debug mode and you should see output like this:

        From Console: Creating DLL
        From Console: Wiring up event handler
        From Console: Doing the work
        From Worker: Step 1
        From Worker: Step 5
        From Worker: Step 10
        From Console: Done - press any key to finish.

    You can distinctly see which lines were written by the console application itself (Program.cs) and which were written by the worker class (WorkerDLL.cs).


  • Application Logging needs work

    Application Logging

    Application logging is the act of logging events that occur within an application, much like a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use is to recreate the steps that led to the root cause of an application error. Other uses include the detection of fraud, verification of user activity, and providing audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logs should include applicable debugging information like the event date and time, the responsible process, and a description of the event.

    "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can indicate whether users like or dislike aspects of a system, based on their usage of the application. This enables application designers to work out why users don't like aspects of an application, so that changes can be made to increase its usefulness and effectiveness.

    "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)

    Commonly Logged Application Events (defined by OWASP):

    - Access of data
    - Creation of data
    - Modification of data in any form
    - Administrative functions
    - Configuration changes
    - Debugging information (application events)
    - Authorization attempts
    - Data deletion
    - Network communication
    - Authentication events
    - Errors/exceptions

    Application Error Logging

    The functionality associated with application error logging is actually the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, those code examples allow developers to handle the various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the Try/Catch statement, allowing errors to be logged when they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error, based on the data found in the application log. By letting applications log errors, developers and IT staff can use the logs to recreate errors that are encountered by end users or other dependent systems.
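    A short sketch of combining the two (my illustration; Logger.Write and ProcessOrder are hypothetical):

        try
        {
            ProcessOrder(order);
        }
        catch (Exception ex)
        {
            // Log the event date/time, the responsible process, and a
            // description of the event, per the OWASP guidance above.
            Logger.Write(DateTime.UtcNow, "OrderService", ex.ToString());
            throw;  // rethrow so callers still see the failure
        }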


  • Is there a way to install i386 packages w/ their i386 dependencies?

    - by foh1981
    I'm using Ubuntu 11.10 64-bit and I wish to install the CAD application DraftSight, which as of now only comes as a 32-bit .deb file. I have installed this with some success before, but since 11.10 supports multiarch I would like to install the i386 versions of DraftSight's dependencies. Ubuntu Software Center cannot handle the file, and neither can gdebi-gtk ('wrong architecture'). I can use dpkg with --force-architecture, but there are A LOT of dependencies which I would then need to install manually.

    Is there a way to install these automatically? Or semi-automatically, with a script of some sort? (I'm thinking something along the lines of extracting the dependencies, adding :i386, and then feeding that to apt-get or something...)

    Below is the output of dpkg-deb --info for the package in question:

        Package: dassault-systemes-draftsight
        Version: 2011.7.1198
        Section: applications
        Priority: extra
        Architecture: i386
        Pre-Depends: libexpat1 (>=2.0.1-4), libglib2.0-0 (>=2.22.3-0), libpcre3 (>=7.8-3), libselinux1 (>=2.0.85-2), zlib1g (>=1:1.2.3.3.dfsg-13), libc6 (>=2.10.1-0), libx11-6 (>=2:1.2.2-1), libxau6 (>=1:1.0.4-2), libxcomposite1 (>=1:0.4.0-4), libxcursor1 (>=1:1.1.9-1build1), libxdamage1 (>=1:1.1.1-4), libxdmcp6 (>=1:1.0.2-3), libxext6 (>=2:1.0.99.1-0), libxfixes3 (>=1:4.0.3-2build1), libxi6 (>=2:1.2.1-2), libxinerama1 (>=2:1.0.3-2), libxrandr2 (>=2:1.3.0-2), libxrender1 (>=1:0.9.4-2), libatk1.0-0 (>=1.28.0-0), libcairo2 (>=1.8.8-2), libdirectfb-extra (>=1.2.7-2), libfontconfig1 (>=2.6.0-1), libfreetype6 (>=2.3.9-5), libgtk2.0-0 (>=2.18.3-1), libpango1.0-0 (>=1.26.0-1), libpixman-1-0 (>=0.14.0-1), libpng12-0 (>=1.2.37-1), libxcb-render-util0 (>=0.3.6-1), libxcb-render0 (>=1.4-1), libxcb1 (>=1.4-1), debconf (>= 1.1) | debconf-2.0
        Depends: libcomerr2 (>=1.41.9-1), libdbus-1-3 (>=1.2.16-0), libexpat1 (>=2.0.1-4), libgcc1 (>=1:4.4.1-4), libgcrypt11 (>=1.4.4-2), libglib2.0-0 (>=2.22.3-0), libgpg-error0 (>=1.6-1), libkeyutils1 (>=1.2-10), libpcre3 (>=7.8-3), libuuid1 (>=2.16-1), zlib1g (>=1:1.2.3.3.dfsg-13), libc6 (>=2.10.1-0), libgl1-mesa-glx (>=7.6.0-1), libglu1-mesa (>=7.6.0-1), libice6 (>=2:1.0.5-1), libsm6 (>=2:1.1.0-2), libx11-6 (>=2:1.2.2-1), libxau6 (>=1:1.0.4-2), libxdamage1 (>=1:1.1.1-4), libxdmcp6 (>=1:1.0.2-3), libxext6 (>=2:1.0.99.1-0), libxfixes3 (>=1:4.0.3-2build1), libxrender1 (>=1:0.9.4-2), libxt6 (>=1:1.0.5-3), libxxf86vm1 (>=1:1.0.2-1), libaudio2 (>=1.9.2-1), libavahi-client3 (>=0.6.25-1), libavahi-common3 (>=0.6.25-1), libcups2 (>=1.4.1-5), libdrm2 (>=2.4.14-1), libfontconfig1 (>=2.6.0-1), libgnutls26 (>=2.8.3-2), libgssapi-krb5-2 (>=1.7dfsg~beta3-1), libk5crypto3 (>=1.7dfsg~beta3-1), libkrb5-3 (>=1.7dfsg~beta3-1), libkrb5support0 (>=1.7dfsg~beta3-1), libstdc++6 (>=4.4.1-4), libtasn1-3 (>=2.2-1), libxcb1 (>=1.4-1), sendmail
        Installed-Size: 284948
        Maintainer: Dassault Systemes <[email protected]>
        Homepage: www.3ds.com
        Description: With DraftSight, you can easily create professional CAD drawings. Supported file formats are DWT, DXF and DWG.


  • DBA Best Practices - A Blog Series: Episode 2 - Password Lists

    - by Argenis
    Digital World, Digital Locks

    One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access. As a DBA, you are very likely to manage some of these assets for your company - and your employer trusts you with keeping them safe. Probably among the most important of these assets are passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetric keys, private keys, etc., etc.

    Have you given some thought to what you're doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

    Good-Ol' Post-It Notes Under Your Keyboard

    If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel's level of encryption is good for your grandma's budget spreadsheet, not for a list of enterprise passwords. I will try to summarize the main point of this best practice in one sentence: you should keep your passwords in an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs' workstations.

    Now I have to break down that statement for you:

    - Encrypted: what's the point of saving your passwords in a file that any Windows admin with enough privileges can read?
    - Access controlled: this one is pretty much self-explanatory.
    - Version controlled: passwords change (and I'm really hoping you do change them), and version control would allow you to track what a previous password was if the utility you've chosen doesn't handle that for you.
    - Backed up: you want a safe copy of the password list to be kept offline, preferably in long-term storage, with relative ease of restoring.
    - Well-known shared location: this is critical for teams. What good is a password list if only one person on the team knows where it is?

    I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server's native encryption solutions, like cell encryption, for this, but I have found such implementations to be impractical for the most part.

    Enter The World Of Utilities

    There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, SharePoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone. Other solutions I've used before that are worth mentioning include PasswordSafe and 1Password, with the latter being a paid solution - but wildly popular on mobile devices. There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers I work with don't need that level of protection for their digital assets, and something like a KeePass database on SharePoint suits them very well.

    What are you doing to safeguard your passwords? Leave a comment below, and join the discussion!

    Cheers,
    -Argenis


  • Syntax of passing lambda causing hair loss (pulling out)

    - by Astara
    Right now, I'm working on refactoring a program that calls its parts by polling into a more event-driven structure. I've created sched and task classes, with the sched to become a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling. Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the "action" routine:

        void MeterMaker::Meter_n_Task (Meter * newmeter,) {
            push(newmeter); // handle non-timed draw events
            Task t = new Task(now() + 0.5L);
            t.period={0,1U};
            t.work_meter = newmeter;
            t.work = [&newmeter](){newmeter.checkevent();};  // <<-- attempt at lambda
            t.flags = T_Repeat;
            t.enable_task();
            _xos->sched_insert(t);
        }

    A sample call to it:

        Meter_n_Task(new CPUMeter(_xos, "CPU "));

    I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but I keep running into roadblocks. It's a lot like whack-a-mole: pound in something to fix one thing, and a new problem pops out elsewhere.

    Part of the problem is that the sched.h file that is trying to hold the Task queue includes the Task header file. The task file wants to refer to the most "base" Meter class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this):

        void *this_data= NULL;
        void (*this_func)() = NULL;

    Note - I didn't really want to store these in the class, as I wanted to use a lambda in that meter-and-task routine above to store a routine plus context to be used to call the meter's action routine. Couldn't figure out the syntax. But I am running into other syntax problems trying to store the pointers... such as:

        g++: COMPILE lsched.cc
        In file included from meter.h:13:0,
                         from ltask.h:17,
                         from lsched.h:13,
                         from lsched.cc:13:
        xosview.h:30:47: error: expected class-name before '{' token
         class XOSView : public XWin, public Scheduler {

    Like above, where it asks for a class name where the class name "Scheduler" already is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense...

    Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to store only 1 pointer in the 'Task' class: a pointer to my lambda that would have already captured the "this" value... but I couldn't get that syntax to work at all when I tried to store it into a var in the 'Task' class.

    This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-))... I've made quite a bit of progress in other areas of the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in perl. Sigh. Not sure of the best way to ask for help here, as this isn't a simple question. But I thought I'd try!... ;-) Too bad I can't attach files to this Q.


  • Why I switched from Asana.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/24/why-i-switch-from-asana.com.aspx

    I used Asana.com for 1-2 years and had a nice experience with it, but it's not so easy. When I started using it, it caused a lot of confusion. Now I have switched away from it.

    When I first saw it, I really didn't understand how to make a private list. There is an icon on top; click on it to make the list private. After doing that, I was still not sure whether it was working. A lot of confusion was created back then, and I had to discuss too much to figure out small things. The UI is interesting but very hard to understand. What I am looking for is just a list that I can keep private, and that I can share only by marking it shared and entering the email address of the person who should see the same list.

    A few days ago I noticed that my Windows 8 phone has an app called Microsoft OneNote. The good thing about this Microsoft app is that I can record my voice in it. If someone wants to make a list for the future, he just needs to speak and it can be recorded. This is awesome when you feel that a mobile keypad is just not as fast as a regular keyboard.

    Google Docs is another good option to handle this. Just make a document and use it; you can share it with a friend, with many options. One of the best things is that this app has a much simpler UI than the other apps.

    One more alternative is https://trello.com, which you may have heard about from Joel on his blog: http://www.joelonsoftware.com/items/2011/09/13.html

    There are many HTML5, browser-based and mobile-based apps, and many of them support multiple platforms; this means you can use them from your PC to your pocket. One good thing we all want is offline support: if you are not online, things will be saved and pushed back to the server when you are online again.

    The biggest problem with some apps is that they are attractive but hard to learn. What each feature does is not clearly defined, which creates frustration and confusion for the user. When apps are not simple to use, people stop trying to learn them. That's pretty much the problem I have with Asana.com.

    If you don't want to try anything new, then what about Sticky Notes, which is part of Windows 7? That app is still usable, since you can store text on it.

    If you know any good app for making a task list that provides access from tablet/mobile, then post a comment here. In the whole world of apps there are a lot of apps doing this same thing differently; I mentioned a few of them here. I hope this describes it nicely.

    Thanks for reading my post.

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We're running into problems somewhere in the SDLC that don't rear their ugly head until time and budget are starting to run out. We try to be "Agile" (software specifications are not as thorough as possible, clients have direct access to the developers any time they want), and we are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:

    - Wireframe mockups for every form view
    - A data dictionary of all field inputs
    - Descriptions of any business rules that affect the system
    - Descriptions of the outputs

    I'm new to software management, but I've overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge number of requests like "Can we add a few more fields to this report... can we redesign the look of this interface... can we send an email at this part of the workflow... can we take this button off this view... can we make this function redirect to a different screen... can we change some text on this screen... can we create a special account where someone can log in and get access to X... this report takes too long to run, can it be optimised... can we remove this step in the workflow... there's got to be a better image we can put here..." etc.

    Some changes are tiny and can be implemented reasonably quickly, but there could be 50-100 or so such requests during the course of the SDLC. Other change requests are what clients claim they "just assumed would be part of the system", even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need to come up with a better way to internally identify whether work being requested is "out of spec", and to communicate this to a client in such a manner that they can understand why what they are asking for is "extra" work. We need a way to track this work and be transparent with it.

    In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients so that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • Can you/should you develop components for ASP.NET MVC?

    - by Vilx-
    Following on from the previous question, I've started to wonder -- is it possible to implement "components" in ASP.NET MVC (latest version)? And should you?

    Let's clarify what I mean by a "component". I mean a "control" (aka "widget"), similar to those that ASP.NET WebForms is built upon. A GridView might be a good example. In WebForms I can place a datasource component on my form (one line of code), a GridView component (another line of code), and bind them together (specify an attribute on the GridView). In the code-behind file I fill the datasource with data (a few lines of DB-querying code), and I'm all set.

    At this point the GridView is a fully functional standalone component. I can open the form, and I'll see all the data. I can sort it by clicking on the column headers; it is split into several pages; I can drag the column headers around and rearrange columns; I can turn on "grouping" mode; etc. And I don't need to write another line of code for any of it. The GridView, as a component, already has all the code tucked away in its classes and assemblies. I just place it on the form, initialize it, and it Just Works. At some times (like sorting or navigating to a different page) it will also perform AJAX callbacks to the server, but those too are handled internally, with my code having no knowledge at all of them. And there are also events that I can attach to if I want to be notified when something happens.

    In MVC I cannot see a way of doing this cleanly. Sure, there are partial views, but those only handle half of the problem -- they render the initial HTML. Some more can be achieved with client-side JavaScript (like column rearranging), but when the grid needs to do an AJAX callback (say, to fetch the next page of data), my code will have to get involved and process that request. At best I guess I can provide some helper methods to process it, but I'll have to write the code that calls them, and also provide a controller method with a signature matching the arguments of that callback (see the sketch below). I guess that I could make some hacks with global events or special routes or something, but that just seems... hackish. Inelegant.

    Perhaps this is not the MVC way? Although I've completed one project in it, I'm still far from being an MVC expert. But then what is? In the intranet application that we're building there are dozens upon dozens of such grids. Naturally I want them all to have a unified look & behavior, and I don't want to repeat the same code all over the place. So what's the "MVC" approach to this problem?
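    To make the "helper methods" idea concrete, here is a rough sketch of the kind of pairing I mean -- the GridHelper class, the GridRequest model, and the action names are all hypothetical, just to illustrate the wiring I would have to repeat for every grid:

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    // Hypothetical model for the grid's AJAX callback arguments.
    public class GridRequest
    {
        public int Page { get; set; }
        public string SortColumn { get; set; }
    }

    // Hypothetical reusable helper: returns the requested slice of data.
    public static class GridHelper
    {
        public static List<T> ApplyRequest<T>(IQueryable<T> source, GridRequest request, int pageSize)
        {
            // Sorting omitted for brevity; a real helper would order by request.SortColumn.
            return source.Skip((request.Page - 1) * pageSize).Take(pageSize).ToList();
        }
    }

    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The per-grid glue I still have to write: a controller action whose
    // signature matches the callback, for every single grid in the app.
    public class EmployeesController : Controller
    {
        public ActionResult GridData(GridRequest request)
        {
            var rows = GridHelper.ApplyRequest(GetEmployees(), request, pageSize: 20);
            return Json(rows, JsonRequestBehavior.AllowGet);
        }

        private IQueryable<Employee> GetEmployees()
        {
            // DB query elided; would come from the data layer.
            return new List<Employee>().AsQueryable();
        }
    }

    The helper keeps the paging logic in one place, but the action method and the route it hangs off are exactly the boilerplate I'd like a "component" to absorb.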

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly.

    System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10). An icon on monitor 2 is present but elongated, almost an artifact: the other icons from the primary screen are absent, yet this one icon is there on the second monitor. Icons can be selected on both screens. Painting is weird on monitor 2. The Launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking the text editor in the screen 2 launcher will launch gedit there. If I drag it, it leaves a trail of after-images, as if repainting is failing. If I hover the cursor over the launcher, the help tags like "LibreOffice Writer" appear, but stay on screen unless I drag the active gedit window over them. Then part of the help bubbles are overwritten, leaving behind after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe it's a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, then Apply and confirm, both monitors work normally (although the right monitor lets the cursor slide off its right edge onto the left edge of monitor 1 -- which sounds correct). Painting of windows does not leave an after-image.

    However, the success is only temporary. The setting survives a reboot, but painting on the left monitor, now monitor 2, then replicates the issues from before. The after-image of the gedit window and of the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left of the display window is matched by the right monitor's color in its upper-left corner. That probably makes sense, as the one on the right is now monitor 1. If I repeat the drag of the left monitor to the right of the right monitor in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. Also, the description bubbles that pop up are overwritten on both screens, so none of those artifacts appear either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this came after positing that the motherboard VGA and HDMI ports could have been the issue. So, I installed an e-GeForce 7600 GT Dual DVI (I know the web thinks it is not DVI, but VGA; the connectors, however, are DVI). No change to the weird behavior: the good parts continue to work, the weirdness also persists, and swapping monitor positions seems to cure the issue. So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings - Displays makes it work, just not across a boot, I suspect so.

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we looked at the operational monitoring side of the solution, it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the service was operating as expected when called from outside the organization.

    A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service allows us to make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that, when going through Windows Azure Service Bus, most standard monitoring solutions do not give you an easy way to do this. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution

    BizTalk 360 currently has an easy way to hook the endpoint manager up to a URL which it will then call; if a successful response is returned, it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.NET web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate that the service is taking too long. The following diagram illustrates the monitoring pattern.

    The diagnostics service, which is hosted in the line-of-business application, allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and they get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code which depends on the error condition returned, or a 200 if it was successful.

    One of the limitations of this approach is that the competing-consumer pattern for listening to messages from Service Bus means you cannot guarantee which server will process your diagnostics check message. With BizTalk 360, however, you could simply add multiple endpoint checks so that it accesses the individual on-premise web servers directly to ensure that each server is working fine, and then check that messages can also be processed through the cloud.

    Conclusion

    It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.
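    For illustration, here is a rough sketch of what such a check page could look like as an HTTP handler -- the IDiagnosticsService contract, the endpoint configuration name, and the 5-second threshold are placeholders of mine, not the actual implementation from the project:

    using System;
    using System.Diagnostics;
    using System.ServiceModel;
    using System.Web;

    [ServiceContract]
    public interface IDiagnosticsService
    {
        [OperationContract]
        bool RunDiagnostics();   // true if the service flexed its working parts successfully
    }

    // BizTalk 360's Web Endpoint Manager polls this handler's URL and treats
    // any non-success status code as an unhealthy endpoint.
    public class ServiceBusCheckHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var stopwatch = Stopwatch.StartNew();
            try
            {
                var factory = new ChannelFactory<IDiagnosticsService>("diagnosticsEndpoint");
                bool healthy = factory.CreateChannel().RunDiagnostics();
                stopwatch.Stop();

                if (!healthy)
                    context.Response.StatusCode = 500;   // diagnostics reported a failure
                else if (stopwatch.Elapsed > TimeSpan.FromSeconds(5))
                    context.Response.StatusCode = 500;   // healthy, but outside the SLA
                else
                    context.Response.StatusCode = 200;   // healthy and fast enough
            }
            catch (CommunicationException)
            {
                context.Response.StatusCode = 503;       // relay or service unreachable
            }
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }

    The nice thing about pushing the SLA decision into the page is that BizTalk 360 only ever needs to understand HTTP status codes.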

    Read the article

  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions? It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite it back to the file. If another process writes to it after it is in memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in-place leaves empty bytes in them unless the remaining bytes are shifted towards the beginning of the file.

    My idea for removing a line is this (in pseudocode):

    filename = "/home/user/somefile";
    file = open(filename, "r");
    tmp = open(filename+".tmp", "ax") || die("could not create tmp file"); // "a" is O_APPEND, "x" is O_EXCL|O_CREAT
    while(write(tmp, read(file)));   // copy $file to $file+".tmp"
    close(file);
    // edit tmp file
    unlink(filename) || die("could not unlink file");
    file = open(filename, "wx") || die("another process must have written to the file after we copied it."); // "w" is overwrite, "x" is force file creation
    while(write(file, read(tmp)));   // copy ".tmp" back to the original file
    unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file?

    Appender process:

    lock = open(filename+".lock", "wx") || die("could not lock file");
    file = open(filename, "a");
    write(file, "stuff");
    close(file);
    close(lock);
    unlink(filename+".lock");

    Editor process:

    lock = open(filename+".lock", "wx") || die("could not lock file");
    file = open(filename, "rw");
    while(contents += read(file));
    // edit "contents"
    write(file, contents);
    close(file);
    close(lock);
    unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks. Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of line $id. It wouldn't be a CSV file anymore, but it might just work. Yes, I know I should be using a database for this, as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
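    For what it's worth, here is a minimal C sketch of the O_CREAT|O_EXCL lock-file idea from the pseudocode above -- the retry loop and the 100 ms back-off are my own additions, and the error handling is deliberately thin:

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Try to take an exclusive lock by creating "<path>.lock" with O_CREAT|O_EXCL.
     * Creation is atomic, so exactly one process wins; the others retry.
     * Returns the lock fd on success, -1 on failure. */
    int acquire_lock(const char *lockpath, int max_tries)
    {
        for (int i = 0; i < max_tries; i++) {
            int fd = open(lockpath, O_WRONLY | O_CREAT | O_EXCL, 0644);
            if (fd >= 0)
                return fd;        /* we own the lock */
            if (errno != EEXIST)
                return -1;        /* real error, not contention */
            usleep(100000);       /* someone else holds it; wait 100 ms and retry */
        }
        return -1;                /* gave up; lock may be stale */
    }

    void release_lock(const char *lockpath, int fd)
    {
        close(fd);
        unlink(lockpath);         /* lingers if we crash before this -- the known weakness */
    }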

    Read the article

  • How'd they do it: Millions of tiles in Terraria

    - by William 'MindWorX' Mariager
    I've been working up a game engine similar to Terraria, mostly as a challenge, and while I've figured out most of it, I can't really seem to wrap my head around how they handle the millions of interactable/harvestable tiles the game has at one time. Creating around 500,000 tiles (that is, 1/20th of what's possible in Terraria) in my engine causes the frame rate to drop from 60 to around 20, even though I'm still only rendering the tiles in view. Mind you, I'm not doing anything with the tiles, only keeping them in memory.

    Update: Code added to show how I do things.

    This is part of a class which handles the tiles and draws them. I'm guessing the culprit is the "foreach" part, which iterates everything, even empty indexes.

    ...
    public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
    {
        foreach (Tile tile in this.Tiles)
        {
            if (tile != null)
            {
                if (tile.Position.X < -this.Offset.X + 32) continue;
                if (tile.Position.X > -this.Offset.X + 1024 - 48) continue;
                if (tile.Position.Y < -this.Offset.Y + 32) continue;
                if (tile.Position.Y > -this.Offset.Y + 768 - 48) continue;
                tile.Draw(spriteBatch, gameTime);
            }
        }
    }
    ...

    Also here is the Tile.Draw method, which could also do with an update, as each Tile uses four calls to the SpriteBatch.Draw method. This is part of my autotiling system, which means drawing each corner depending on neighboring tiles. texture_* are Rectangles, and are set once at level creation, not on each update.

    ...
    public virtual void Draw(SpriteBatch spriteBatch, GameTime gameTime)
    {
        if (this.type == TileType.TileSet)
        {
            spriteBatch.Draw(this.texture, this.realm.Offset + this.Position, texture_tl, this.BlendColor);
            spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 0), texture_tr, this.BlendColor);
            spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(0, 8), texture_bl, this.BlendColor);
            spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 8), texture_br, this.BlendColor);
        }
    }
    ...

    Any critique of or suggestions for my code are welcome.

    Update: Solution added. Here's the final Level.Draw method. The Level.TileAt method simply checks the inputted values, to avoid OutOfRange exceptions.

    ...
    public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
    {
        Int32 startx = (Int32)Math.Floor((-this.Offset.X - 32) / 16);
        Int32 endx = (Int32)Math.Ceiling((-this.Offset.X + 1024 + 32) / 16);
        Int32 starty = (Int32)Math.Floor((-this.Offset.Y - 32) / 16);
        Int32 endy = (Int32)Math.Ceiling((-this.Offset.Y + 768 + 32) / 16);
        for (Int32 x = startx; x < endx; x += 1)
        {
            for (Int32 y = starty; y < endy; y += 1)
            {
                Tile tile = this.TileAt(x, y);
                if (tile != null)
                    tile.Draw(spriteBatch, gameTime);
            }
        }
    }
    ...

    Read the article

  • Validation and authorization in layered architecture

    - by SonOfPirate
    I know you are thinking (or maybe yelling), "not another question asking where validation belongs in a layered architecture?!?" Well, yes, but hopefully this will be a little bit of a different take on the subject. I am a firm believer that validation takes many forms, is context-based, and varies at each level of the architecture. That is the basis for the post - helping to identify what type of validation should be performed in each layer. In addition, a question that often comes up is where authorization checks belong.

    The example scenario comes from an application for a catering business. Periodically during the day, a driver may turn in to the office any excess cash they've accumulated while taking the truck from site to site. The application allows a user to record the 'cash drop' by collecting the driver's ID and the amount. Here's some skeleton code to illustrate the layers involved:

    public class CashDropApi  // This is in the Service Facade Layer
    {
        [WebInvoke(Method = "POST")]
        public void AddCashDrop(NewCashDropContract contract)
        {
            // 1
            Service.AddCashDrop(contract.Amount, contract.DriverId);
        }
    }

    public class CashDropService  // This is the Application Service in the Domain Layer
    {
        public void AddCashDrop(Decimal amount, Int32 driverId)
        {
            // 2
            CommandBus.Send(new AddCashDropCommand(amount, driverId));
        }
    }

    internal class AddCashDropCommand  // This is a command object in the Domain Layer
    {
        public AddCashDropCommand(Decimal amount, Int32 driverId)
        {
            // 3
            Amount = amount;
            DriverId = driverId;
        }

        public Decimal Amount { get; private set; }
        public Int32 DriverId { get; private set; }
    }

    internal class AddCashDropCommandHandler : IHandle<AddCashDropCommand>
    {
        internal ICashDropFactory Factory { get; set; }       // Set by IoC container
        internal ICashDropRepository CashDrops { get; set; }  // Set by IoC container
        internal IEmployeeRepository Employees { get; set; }  // Set by IoC container

        public void Handle(AddCashDropCommand command)
        {
            // 4
            var driver = Employees.GetById(command.DriverId);
            // 5
            var authorizedBy = CurrentUser as Employee;
            // 6
            var cashDrop = Factory.CreateCashDrop(command.Amount, driver, authorizedBy);
            // 7
            CashDrops.Add(cashDrop);
        }
    }

    public class CashDropFactory
    {
        public CashDrop CreateCashDrop(Decimal amount, Employee driver, Employee authorizedBy)
        {
            // 8
            return new CashDrop(amount, driver, authorizedBy, DateTime.Now);
        }
    }

    public class CashDrop  // The domain object (entity)
    {
        public CashDrop(Decimal amount, Employee driver, Employee authorizedBy, DateTime at)
        {
            // 9
            ...
        }
    }

    public class CashDropRepository  // The implementation is in the Data Access Layer
    {
        public void Add(CashDrop item)
        {
            // 10
            ...
        }
    }

    I've indicated 10 locations where I've seen validation checks placed in code. My question is what checks you would, if any, be performing at each, given the following business rules (along with standard checks for length, range, format, type, etc):

    1. The amount of the cash drop must be greater than zero.
    2. The cash drop must have a valid Driver.
    3. The current user must be authorized to add cash drops (the current user is not the driver).

    Please share your thoughts, how you have or would approach this scenario, and the reasons for your choices.
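    To make the rules concrete, here is a rough sketch of how rule 1 (and part of rule 2) might read if placed at location 9 as a domain invariant -- just one of the placements I'm weighing, not a conclusion:

    public class CashDrop
    {
        public CashDrop(Decimal amount, Employee driver, Employee authorizedBy, DateTime at)
        {
            // 9 -- if the invariant lives here, no CashDrop can ever be constructed
            // in an invalid state, no matter which layer created it.
            if (amount <= 0)
                throw new ArgumentOutOfRangeException("amount", "The amount of the cash drop must be greater than zero.");
            if (driver == null)
                throw new ArgumentNullException("driver");
            ...
        }
    }

    The trade-off, of course, is that a constructor exception surfaces late and gives the facade layer little to report back to the caller, which is exactly the tension I'm asking about.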

    Read the article

< Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >