Search Results

Search found 17336 results on 694 pages for 'developer events'.


  • What are the differences between abstract classes and interfaces, and when should each be used?

    - by user66662
    Recently I have started to wrap my head around OOP, and I am now at the point where the more I read about the differences between abstract classes and interfaces, the more confused I become. So far: neither can be instantiated; interfaces are more or less structural blueprints that determine the skeleton; and abstracts differ by being able to contain partially developed code. I would like to learn more about these through my specific situation. Here is a link to my first question if you would like a little more background information: What is a good design model for my new class? Here are two classes I created:

        class Ad {
            $title;
            $description;
            $price;
            function get_data($website) { }
            function validate_price() { }
        }

        class calendar_event {
            $title;
            $description;
            $start_date;
            function get_data($website) {
                //guts
            }
            function validate_dates() {
                //guts
            }
        }

    As you can see, these classes are almost identical. Not shown here, but there are other functions, like get_zip() and save_to_database(), that are common across my classes. I have also added other classes, Cars and Pets, which have all the common methods and of course properties specific to those objects (mileage and weight, for example). Now I have violated the DRY principle and I am managing and changing the same code across multiple files. I intend on having more classes like boats, horses, or whatever. So is this where I would use an interface or abstract class? From what I understand about abstract classes, I would use a superclass as a template with all of the common elements built into the abstract class, and then add only the items specifically needed in future classes. For example:

        abstract class content {
            $title;
            $description;
            function get_data($website) { }
            function common_function2() { }
            function common_function3() { }
        }

        class calendar_event extends content {
            $start_date;
            function validate_dates() { }
        }

    Or would I use an interface and, because these are so similar, create a structure that each of the subclasses is forced to use for integrity reasons, leaving it up to the end developer who fleshes out that class to be responsible for each of the details, even of the common functions? My thinking there is that some 'common' functions may need to be tweaked in the future for the needs of their specific class. Despite all that, if you believe I am misunderstanding the what and why of abstracts and interfaces altogether, by all means let a valid answer stop me thinking in this direction and suggest the proper way to move forward! Thanks!
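
    In case a concrete illustration helps, here is a minimal sketch of the abstract-class route - in C# rather than PHP, since the idea carries over directly, and all names here are invented: shared state and behavior live in the base class, while anything that must vary per subclass is declared abstract so every subclass is forced to supply it (the "integrity" an interface gives, plus shared code an interface cannot carry).

        using System;

        public abstract class Content
        {
            public string Title { get; set; }
            public string Description { get; set; }

            public virtual void GetData(string website)
            {
                // shared fetching logic; subclasses may override to tweak it
            }

            public abstract bool Validate(); // forced on every subclass
        }

        public class CalendarEvent : Content
        {
            public DateTime StartDate { get; set; }

            public override bool Validate()
            {
                return StartDate > DateTime.Now; // this subclass's own rule
            }
        }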

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last updated: 02/01/2011. It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before for other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or any questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I'll update them, or just subscribe to the RSS feed. A "green" color on the check-block means I've done that part so far; red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:
    - Introducing Microsoft .NET (Pro-Developer): link
    - Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    - Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification: link
    [ ] On that page you'll find several resources, and the first you should follow is "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) - Books I'm Reading: / Links: / My Notes:
    [ ] Optimizing Data Access and Messaging (17%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing the Application Architecture (19%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform: link / Links: / My Notes:
    [ ] Preparing for Application and Service Deployment (15%) - Books I'm Reading: / Links: / My Notes:
    [ ] Investigating and Analyzing Applications (16%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing Integrated Solutions (15%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) / Links: / My Notes:

  • Alternatives to multiple inheritance for my architecture (NPCs in a Realtime Strategy game)?

    - by Brettetete
    Coding isn't actually that hard. The hard part is writing code that makes sense and is readable and understandable. I want to become a better developer and create some solid architecture, so I want to design an architecture for NPCs in a video game. It is a real-time strategy game like Starcraft, Age of Empires, Command & Conquer, etc., so I'll have different kinds of NPCs. An NPC can have one to many of these abilities (methods): Build(), Farm() and Attack(). Examples:
    - Worker can Build() and Farm()
    - Warrior can Attack()
    - Citizen can Build(), Farm() and Attack()
    - Fisherman can Farm() and Attack()

    I hope everything is clear so far. So now I have my NPC types and their abilities. But let's come to the technical/programmatic aspect: what would be a good programmatic architecture for my different kinds of NPCs? I could have a base class. Actually, I think this is a good way to stick with the DRY principle, so I can have methods like WalkTo(x, y) in my base class, since every NPC will be able to move. But now comes the real problem: where do I implement my abilities (Build(), Farm() and Attack())? Since the abilities consist of the same logic, it would be annoying, and would break the DRY principle, to implement them for each NPC (Worker, Warrior, ...). I could implement the abilities within the base class, but that would require some kind of logic that verifies whether an NPC can use ability X (IsBuilder, CanBuild, ...). I think it is clear what I want to express, but I don't feel comfortable with this idea - it sounds like a bloated base class with too much functionality. I use C# as the programming language, so multiple inheritance isn't an option here; that means extra base classes like Fisherman : Farmer, Attacker won't work.
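
    One commonly suggested alternative, sketched below with invented names: keep shared movement in the base class, express each ability as an interface, and delegate the shared ability logic to a single internal helper so it is written only once.

        public interface IBuilder  { void Build();  }
        public interface IFarmer   { void Farm();   }
        public interface IAttacker { void Attack(); }

        public abstract class Npc
        {
            public void WalkTo(int x, int y) { /* shared movement code */ }
        }

        // Shared ability implementations live in one place (DRY preserved).
        internal static class Abilities
        {
            public static void Build(Npc npc)  { /* shared build logic  */ }
            public static void Farm(Npc npc)   { /* shared farm logic   */ }
            public static void Attack(Npc npc) { /* shared attack logic */ }
        }

        // Each NPC type declares only the abilities it has and forwards to the
        // shared logic; callers can test for an ability via "npc is IBuilder".
        public class Citizen : Npc, IBuilder, IFarmer, IAttacker
        {
            public void Build()  { Abilities.Build(this);  }
            public void Farm()   { Abilities.Farm(this);   }
            public void Attack() { Abilities.Attack(this); }
        }

        public class Warrior : Npc, IAttacker
        {
            public void Attack() { Abilities.Attack(this); }
        }

    A component-based design (one ability object per NPC, held in a list) is the other frequent suggestion, and it scales better if abilities need per-NPC state.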

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a (financial) modeling system that has dozens of variables. Some of the variables are independent and function as inputs to the system; most are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:
    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependents. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:
    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit - more context: the model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
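
    For what it's worth, the established idea this is circling is the dependency graph used by spreadsheet engines and dataflow/reactive frameworks: register each calculated variable with its inputs, then push changes through the graph. A minimal, hypothetical C# sketch (names invented here, no cycle detection):

        using System;
        using System.Collections.Generic;

        public class CalcGraph
        {
            private readonly Dictionary<string, Func<double?>> _formulas =
                new Dictionary<string, Func<double?>>();
            private readonly Dictionary<string, List<string>> _dependents =
                new Dictionary<string, List<string>>();
            private readonly Dictionary<string, double?> _values =
                new Dictionary<string, double?>();

            public void Define(string name, Func<double?> formula, params string[] inputs)
            {
                _formulas[name] = formula;
                foreach (var input in inputs)
                {
                    if (!_dependents.ContainsKey(input))
                        _dependents[input] = new List<string>();
                    _dependents[input].Add(name); // reverse edge: input -> dependent
                }
            }

            public double? Get(string name) =>
                _values.TryGetValue(name, out var v) ? v : null;

            public void Set(string name, double? value)
            {
                _values[name] = value;
                if (!_dependents.TryGetValue(name, out var deps)) return;
                foreach (var dep in deps)          // naive cascade; assumes an acyclic graph
                    Set(dep, _formulas[dep]());
            }
        }

    Usage mirrors the BMI example: graph.Define("BMI", () => graph.Get("Weight") / (graph.Get("Height") * graph.Get("Height")), "Height", "Weight"). And yes, F# (or any reactive library) tends to make this style more natural.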

  • Our Favorite Highlights from OpenWorld 2012

    - by Kathy.Miedema
    By Kathy Miedema and Misha Vaughan, Oracle Applications User Experience

    The Oracle Applications User Experience (UX) team's activities around OpenWorld expand every year, but this year we certainly raised the bar. Members of our team helped deliver three separate all-day training events in the week prior to OpenWorld. Our Fusion User Experience Advocates (FXA) and Applications UX Sales Ambassadors (SAMBA) have all-new material around the Oracle user experience to deliver at conferences in the coming year - Fusion Applications design patterns, mobile design patterns, and the new face of Fusion. We also delivered a hands-on workshop sharing user experience tools for our customers that is designed to answer this question: "If I have no UX staff, what do I do?"

    We also spent the weeks just before OpenWorld preparing to talk about the new face of Fusion Applications, a greatly simplified entry experience into Fusion Applications for self-service users, CRM users, and IT managers who want to change the look and feel quickly. Special thanks to Oracle ACE Director Floyd Teter for the first mention of our project.

    [Photo: Jeremy Ashley, VP, Oracle Applications User Experience]

    Customers may have seen one of the many OpenWorld session demos of the new face of Fusion, which will be available with Fusion Applications soon. It was shown in sessions by Oracle's Chris Leone, Anthony Lye, and our own Vice President, Jeremy Ashley, among others. Leone reinforced the importance of user experience as one of three main design principles for Fusion Applications, emphasizing that Fusion was designed from the beginning to be intelligent, social, and mobile. User experience highlights of the new face of Fusion, he said, included the need for "zero training," and he called the experience "easy to use." He added that deploying it for HCM self-service would be effortless.

    [Photo: Customers take part in a usability lab tour during OpenWorld 2012.]

    Customers also may have seen the new face of Fusion on the demogrounds, or during one of our team's chartered lab tours at the end of the week. We tested other new designs at our on-site lab in the InterContinental Hotel, next to Moscone West.

    [Photo: Applications User Experience team members show eye-tracking and mobile demos at OOW.]

    We were also excited to kick off new branches of the Oracle Usability Advisory Board, which now has groups in Latin America and the Middle East, in addition to North America and EMEA. And we were pleasantly surprised by the interest in one of our latest research projects, Oracle Voice, which is designed to enable faster data input for on-the-go users. We offer a big thank-you to the Nuance demopod for sharing the demo with OpenWorld attendees. For more information on our program and products like the new face of Fusion, please comment below.

  • New Worklist features on 12.1.3

    - by Vijay Shanmugam
    The following new Worklist features are available in E-Business Suite 12.1.3 via Patch 13646173.

    Ability to view comments on top of a notification: If an action is performed on a notification, such as Reassign, Request for Information or Provide Information, the recipient of the notification will see who performed the last action and the associated comment on top of the notification.

    Reassigning a request-for-information notification: If an approver requests more information on a notification from its submitter, the submitter now has two options: Answer Request for More Information, or Transfer Request for More Information. If the submitter thinks the requested information can be provided by another user, he/she can transfer the request to that user. Please note that only Transfer is supported for Request for More Information; once transferred, the submitter cannot access the notification and provide the requested information.

    Use actual sent date when reassigning a notification: The Sent field in the notification header always showed the date on which the notification was first created. If the notification was later reassigned, the Sent date was not updated to show the last action date. This caused problems in the following scenario:
    - An approval notification was sent to JACK on 01-JAN-2012.
    - JACK waited 10 days before reassigning it to JILL on 10-JAN-2012.
    - JILL does not see the notification as sent on 10-JAN-2012; instead she sees it as sent on 01-JAN-2012.
    - Although the notification was originally created on 01-JAN-2012, it was sent to JILL only on 10-JAN-2012.
    The enhancement now shows the correct sent date in the Worklist and on the Notification Details page. (Figure 1 depicts all three features above.)

    Related Action History for response-required notifications: So far it was possible to embed the Action History of a response-required notification into another FYI notification using the #RELATED_HISTORY attribute (please refer to the Workflow Developer Guide for details about this attribute). The enhancement now enables developers to embed the Action History of one response-required notification into another response-required notification. To do so, create the message attribute #RELATED_HISTORY and set its value at run-time in the following format:

        {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME

    The TITLE, ITEM_TYPE and ITEM_KEY are optional values. TITLE is used as the Related Action History header title; if TITLE is not present, a default title of "Related Action History" is shown. If ITEM_TYPE is present and ITEM_KEY is not (for example: {TITLE}[ITEM_TYPE]PROCESS_NAME:ACTIVITY_LABEL_NAME), the Related Action History is populated from the parent item type of the current item. If both ITEM_TYPE and ITEM_KEY are present (for example: {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME), the Related Action History is populated from that specific instance activity. (Figure 2 depicts the Related Action History feature.)

  • How to deal with Warning : "Uncommittable transaction is detected at the end of the batch. The trans

    - by VishnuTiwariBlog
    Hi. If you are integrating with SQL Server and dealing with batch messages, you may encounter this problem, and it is inevitable. The reason is contention for resources. If your batch contains four messages and all four messages have to be updated to SQL Server, then four processes will contend for the SQL Server table and resources at the same time, and the obvious result is that a few of your transactions will be left uncommitted. If you are not handling dehydration [i.e., not modifying the default dehydration properties], your orchestration will dehydrate and go into a retry; if retry is set for every five minutes, then after five minutes the port will send the message to the database.

    The reason for writing this post was that I did not want to see so many DEHYDRATED messages, which was happening because host throttling was not set: as soon as the BizTalk process finds that SQL resources are unavailable, it dehydrates that process and the process goes into a retry. The contention for resources is unavoidable, but we can fine-tune the dehydration settings. If you increase the time that an orchestration can be blocked at a subscription before being dehydrated, you possibly give the BizTalk engine more time to handle SQL resource availability. At least I solved the problem by fine-tuning the dehydration properties. Below is the section of config info which you need to add to BTSNTsvc.exe.config:

        <?xml version="1.0" ?>
        <configuration>
            <configSections>
                <section name="xlangs" type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
            </configSections>
            <runtime>
                <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                    <probing privatePath="BizTalk Assemblies;Developer Tools;Tracking" />
                </assemblyBinding>
            </runtime>
            <xlangs>
                <Configuration>
                    <Dehydration MaxThreshold="1800" MinThreshold="1" ConstantThreshold="-1">
                        <VirtualMemoryThrottlingCriteria OptimalUsage="900" MaximalUsage="1300" IsActive="true" />
                        <PrivateMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="true" />
                        <PhysicalMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="false" />
                    </Dehydration>
                </Configuration>
            </xlangs>
        </configuration>

  • Resources to Help You Get Up to Speed on ADF Mobile

    - by Joe Huang
    Hi, everyone. By now, I hope you have had a chance to review the sample applications and try to deploy them - this is a great way to get started on learning ADF Mobile. To help you get started, here is a central list of steps to get up to speed on ADF Mobile, with related resources that can help you develop your mobile application:
    1. Check out the ADF Mobile Landing Page on the Oracle Technology Network.
    2. View this introductory video.
    3. Read this Data Sheet and FAQ on ADF Mobile.
    4. Download JDeveloper 11.1.2.3. Download the generic version of JDeveloper for installation on Mac; note that there are workarounds required to install JDeveloper on a Mac.
    5. Download the ADF Mobile extension from the JDeveloper Update Center or here. Please note you will need to configure JDeveloper for Internet access (in HTTP Proxy preferences) in order to install the extension, as the installation process will prompt you for a license that's linked off Oracle's web site.
    6. View this end-to-end application creation video.
    7. View this end-to-end iOS deployment video if you are developing for iOS devices.
    8. Configure your development environment, including the location of the SDK, etc., in the JDeveloper - Tools - Preferences - ADF Mobile dialog box. The two videos above should cover some of these configuration steps.
    9. Check out the sample applications shipped with JDeveloper, and then deploy them to simulators/devices using the steps outlined in the videos above. This blog entry outlines all sample applications shipped with JDeveloper.
    10. Develop a simple mobile application by following this tutorial.
    11. Try out the Oracle OpenWorld 2012 Hands-on Lab to get a sense of how to programmatically access server data. You will need these source files.
    12. Ask questions in the ADF/JDeveloper Forum.
    13. Search the ADF Mobile Preview Forum for entries from ADF Mobile Beta Testing participants.
    14. For all other questions, check out the exhaustive and detailed ADF Mobile Developer Guide.
    15. If something does not seem right, check out the ADF Mobile Release Notes.

    Thanks,
    Oracle ADF Mobile Product Management Team

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in one developer and one web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pen drives and DVD writers on my development machines, but uploading data or attaching the code to an email would still be possible. My question is incomplete, but programmers who have been in my situation, please advise: how should I go about this? Building a team, maintaining code secrecy, etc. I am looking forward to signing a secrecy contract with the employees if needed too. (Please add relevant tags.)

    Update: Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now, but I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given this is a startup with almost no funding, in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. Finally, even if they do run away with the code, it is service that matters after the sale is made, so I am not really worried for the long term.

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course, that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility.

    My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers: you either have to run VMWare or Hyper-V on your developer machine, or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements; better workflow and web part development have also eased the burden on many developers.

    The other thing the show brought to my mind was more around usage. Users want to be able to self-serve everything, without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun: you can do it, but that doesn't make it the smart thing to do.

    As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place then, as they said in the episode of DNR, document libraries - and I believe SharePoint in general - start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization.

    These are just a couple of the thoughts that were triggered by the show; I'm sure there are more discussions to be had. Feel free to leave your comments about the pros and cons of SharePoint.

  • Many sources of movement in an entity system

    - by Sticky
    I'm fairly new to the idea of entity systems, having read a bunch of stuff (most usefully, this great blog and this answer). I'm having a little trouble, though, understanding something as simple as being able to manipulate the position of an object from an undefined number of sources. That is, I have my entity, which has a position component. I then have some event in the game which tells this entity to move a given distance, in a given time. These events can happen at any time and will have different values for distance and time, and the result is that they are compounded together.

    In a traditional OO solution, I'd have some sort of MoveBy class containing the distance and time, and an array of those inside my game object class. Each frame, I'd iterate through all the MoveBys and apply each to the position; if a MoveBy has reached its finish time, I'd remove it from the array. With the entity system, I'm a little confused about how I should replicate this sort of behavior. If there were just one of these at a time, instead of compounding them together, it'd be fairly straightforward (I believe) and look something like this:
    - PositionComponent containing x, y
    - MoveByComponent containing x, y, time
    - Entity which has both a PositionComponent and a MoveByComponent
    - MoveBySystem that looks for an entity with both these components and adds the value of the MoveByComponent to the PositionComponent; when the time is reached, it removes the component from that entity.

    I'm a bit confused as to how I'd do the same thing with many MoveBys. My initial thoughts are that I would have:
    - PositionComponent and MoveByComponent, the same as above
    - MoveByCollectionComponent, which contains an array of MoveByComponents
    - MoveByCollectionSystem that looks for an entity with a PositionComponent and a MoveByCollectionComponent, iterating through the MoveByComponents inside it, applying/removing as necessary.
    A sketch of this is below.

    I guess this is a more general problem of having many of the same component and wanting a corresponding system to act on each one. My entities store their components in a hash of component type - component, so strictly they have only one component of a particular type per entity. Is this the right way to be looking at this? Should an entity only ever have one component of a given type at all times?
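
    Here is a minimal, hypothetical C# sketch of the MoveByCollection approach described above - every active move is applied additively each frame, so overlapping moves compound naturally, and finished moves drop out:

        using System;
        using System.Collections.Generic;

        public sealed class PositionComponent { public float X, Y; }

        // Velocity per second plus the seconds left to run.
        public sealed class MoveByComponent { public float Dx, Dy, Remaining; }

        public sealed class MoveByCollectionComponent
        {
            public readonly List<MoveByComponent> Moves = new List<MoveByComponent>();
        }

        public sealed class MoveByCollectionSystem
        {
            public void Update(PositionComponent pos, MoveByCollectionComponent moves, float dt)
            {
                for (int i = moves.Moves.Count - 1; i >= 0; i--)
                {
                    var m = moves.Moves[i];
                    float step = Math.Min(dt, m.Remaining);
                    pos.X += m.Dx * step;          // all active moves compound additively
                    pos.Y += m.Dy * step;
                    m.Remaining -= step;
                    if (m.Remaining <= 0f)
                        moves.Moves.RemoveAt(i);   // expired moves are removed
                }
            }
        }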

  • Releasing the new Sample Browser Phone app

    - by Jialiang
    Originally posted on: http://geekswithblogs.net/Jialiang/archive/2014/06/05/releasing-the-new-sample-browser-phone-app.aspx

    Starting its journey in 2010, Sample Browser is completing its tetralogy by releasing a Windows Phone version of Sample Browser today. The new Windows Phone app is the fourth milestone for Sample Browser since we released the desktop version and the Visual Studio version in 2012 and the Windows Store version in 2013. This time, by providing a sample browser designed for a 'walking' platform in response to MVPs' suggestions during last year's MVP Global Summit, we are literally putting a world of code samples "at developers' fingertips". If you would like to have a code gallery of over 7,000 quality code samples in your pocket, then click here to download our Windows Phone Sample Browser and start a fantastic mobile experience.

    With the Windows Phone version of Sample Browser and the Internet, you can search for code samples on MSDN at any time and anywhere you want, 24/7 - even in bed. You can also check code sample details and share them with your friends. Compared to the other three pieces of the tetralogy (the desktop version, the Visual Studio version, and the Windows Store version), the Windows Phone version sells itself on convenience and instant connectivity. For those who need to reach code samples in mobile circumstances where no PC is available, the Windows Phone Sample Browser will definitely be the right service for you. Aside from sharing samples via email as the other three do, the Windows Phone version also allows you to share samples via SMS and Near Field Communication (NFC).

    What's next? Currently, the Windows Phone Sample Browser only supports online MSDN code search, but we already plan to upgrade Sample Browser to allow users to do 'Bing code search' and to add and manage their private code snippets. We will also upgrade the app to a universal app. Universal apps are a new concept introduced at the Microsoft Build Developer Conference 2014: a development model that allows a single app to be deployed across multiple Windows devices such as Windows Phone, Windows 8.1, and Xbox. Once we finish upgrading Sample Browser to a universal app, you will be able to synchronize your own code snippets across different devices; you could also mark a code sample as a favorite on your Windows Phone and continue to study the sample when you are at your desktop. By then, sharing data between platforms will be a piece of cake, and the user experience of Sample Browser on different platforms will be more consistent. The best is yet to come!

    We sincerely suggest you give Sample Browser a try (click here to download). If you love what you see in Sample Browser, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions for us, please contact us at [email protected]. Your precious opinions and comments are more than welcome.

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion.

    I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, solved it in your own way, and described the process in your own voice.

    In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess.

    My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members.

    As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I do not use the singleton pattern (I've read many random posts with various reasons not to use it); I have a gameStateManager which runs the pointer cCurrentGameState->render(), etc. I want to make a transitioning game. This engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding - I just mean the base engine), both of which will load different levels and events, such as world map, dungeon, shops, etc. So I then thought: rather than having to store all this data within all the states, I would break the engine into gameStates and playStates. When gameState reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. Now I have added a separate mapEditor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering whether I am too far off a common implementation of this.

    My second question is about image resources. I have my sprite class with nothing but static members - mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each reference in the map can have different numbers of clips with different sizes. I thought it would be better to store all these images and the screen within one static body, since 20 different goblins all use the same sprite sheet and all need to print to the same screen; and of course, this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well: I can even search through the map at creation of an entity type to see if a particular image exists, creating it if it doesn't, and destroying the image when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes it too fast, the game crashes. Plus, the clip map call is half of this line:

        SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY,
                              SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen,
                              SPRITE::mImageClips[IMAGE_TILEMAP]->clips.at( cEditorMap.cTiles[x][y].iTileType ) );

    Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips?

    My last question is about collision detection. I only grasp the basics and will look at per-pixel and circular collision soon, but how can I determine which side a collision comes from with just basic rectangle collision detection? I tried breaking each entity into four collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
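
    On the last question, one common answer - shown here as a hedged C# sketch, though the idea is language-agnostic and ports directly to SDL rects in C++ - is to compare the penetration depth on each axis and resolve along the shallower one; the sign on that axis tells you the side:

        using System;

        public struct Rect { public float X, Y, W, H; }

        public static class Collision
        {
            // Returns null if there is no overlap; otherwise "left"/"right"/
            // "top"/"bottom" from the perspective of rectangle a.
            public static string Side(Rect a, Rect b)
            {
                float overlapX = Math.Min(a.X + a.W, b.X + b.W) - Math.Max(a.X, b.X);
                float overlapY = Math.Min(a.Y + a.H, b.Y + b.H) - Math.Max(a.Y, b.Y);
                if (overlapX <= 0 || overlapY <= 0) return null;

                if (overlapX < overlapY)  // shallower on x: a horizontal hit
                    return (a.X + a.W / 2) < (b.X + b.W / 2) ? "right" : "left";
                return (a.Y + a.H / 2) < (b.Y + b.H / 2) ? "bottom" : "top";
            }
        }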

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled "Three Solutions for Simplifying Cloud/On-Premises Integration".

    As you will see in the replay below, this session introduces three common reasons for integration complexity:
    - Disparate toolkits
    - Lack of API management
    - Rigid, brittle infrastructure
    and then the three solutions to these challenges:
    - Unify cloud and on-premises integration
    - Enable multi-channel development with API management
    - Plan for the unexpected - future readiness

    The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition. Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API management is covered, as shown in the screenshot to the right. This case study covers the core areas of API management for partners building their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of the data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out.

    When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction, much of which relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session: for example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C, shown in the image to the right. As you look into more "smart services" such as location-based services, how "future ready" is your application infrastructure?

    Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure.

  • Microsoft and Application Architectures

    Microsoft has dealt with several kinds of application architectures, including but not limited to desktop applications, web applications, operating systems, relational database systems, Windows services, and web services. Because of the size and market share of Microsoft, virtually every modern language works with or around a Microsoft product. Some of these languages include: Visual Basic, VB.Net, C#, C++, C, ASP.NET, ASP, HTML, CSS, JavaScript, Java and XML.

    From my experience, Microsoft strives to maintain an n-tier application standard, where an application is comprised of multiple layers that perform specific functions. For example, the presentation layer, business layer, and data access layer are three general layers that just about every formally structured application contains. The presentation layer contains anything to do with displaying information to the screen and how it appears on the screen. The business layer is the middleman between the presentation layer and the data access layer; it transforms data from the data access layer into usable information to be stored later or sent to an output device through the presentation layer. The data access layer does as its name implies: it allows the business layer to access data from a data source like MS SQL Server, XML, or another data source.

    One of my favorite technologies that Microsoft has come out with recently is the .NET Framework. This framework allows developers to code an application in multiple languages and compiles each of them into a common intermediate language (CIL) executed by the Common Language Runtime (CLR). This allows VB and C# developers to work seamlessly together as if they were working in the same project. The only real disadvantage to using the .NET Framework is that it only natively runs on Microsoft operating systems. However, Microsoft does control a majority of the operating systems currently installed on modern computers and servers, especially personal home computers. Given that the .NET Framework is so flexible, it is ideal for businesses to develop applications around it, as long as they are willing to commit to using Microsoft technologies and operating systems in the future. I have been a professional developer for about 9+ years now and have seen the .NET Framework work flawlessly in just about every instance I have used it. In addition, I have used it to develop web applications, mobile phone applications, desktop applications, web service applications, and Windows service applications, to name a few.
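
    To make the three-layer description above concrete, here is a minimal, hypothetical C# sketch - the names are invented, and a real data access layer would query a database rather than return a constant:

        using System;

        // Data access layer: the only tier that touches the data source.
        public class ProductRepository
        {
            public decimal GetPrice(int productId)
            {
                return 9.99m; // stand-in for a real query against SQL Server, XML, etc.
            }
        }

        // Business layer: turns raw data into usable information.
        public class PricingService
        {
            private readonly ProductRepository _repository = new ProductRepository();

            public string DescribePrice(int productId)
            {
                return string.Format("Price: {0:C}", _repository.GetPrice(productId));
            }
        }

        // Presentation layer: only concerned with showing the result.
        public class ProductScreen
        {
            private readonly PricingService _service = new PricingService();

            public void Render(int productId)
            {
                Console.WriteLine(_service.DescribePrice(productId));
            }
        }

    The key property of the layering is that each tier talks only to the one directly below it, so any layer can be swapped without disturbing the others.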

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one project and so was easily moved to a new one (probably by the same developer who originally wrote it).

    I mentioned this fact in one of the programmers' meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into the possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment; we have projects mainly in 2.0 but are starting to branch out to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially, when creating this DLL, I included some shared classes that could be used across any project type (web, client, etc.), but then I started looking at adding some shared modules that would be useful in our MVC applications only. However, this meant I had to include a reference to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL with references to web-specific libraries that could also be used in a client application. Not only that: our DLL initially referenced MVC 2.0, yet we will eventually move to MVC 3.0 for all projects, and I expect a lot of the classes in this library to still be relevant to MVC 3. Our code within this DLL is separated into its own namespaces, such as:
    - CompanyDLL.Primitives
    - CompanyDLL.Web.Mvc
    - CompanyDLL.Helpers

    So, my questions are:
    1. Is it OK to do a shared library like this, or if we have web-specific features in it, should we create a separate web DLL targeted only at a specific framework or MVC version?
    2. If it's OK, what kind of issues might we face when using a library that references MVC 2 in an MVC 3 project, for example? I would think that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need the MVC 2.0 libraries when they might only want to use some of the generic classes.

    The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen classes and methods copied across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!
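
    For what it's worth, the usual answer is the split hinted at in the question itself: one framework-agnostic core assembly plus one thin, version-specific web assembly per MVC version. A hypothetical sketch of how the pieces might divide (all names invented, and the web half kept free of an actual MVC reference so the example stays self-contained):

        // CompanyDLL.Core.dll - no references to System.Web.* at all,
        // safe for desktop and web projects alike.
        namespace CompanyDLL.Primitives
        {
            public static class StringHelpers
            {
                public static string Truncate(string value, int maxLength)
                {
                    if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                        return value;
                    return value.Substring(0, maxLength);
                }
            }
        }

        // CompanyDLL.Web.Mvc2.dll - the ONLY assembly referencing System.Web.Mvc 2.0;
        // a parallel CompanyDLL.Web.Mvc3.dll can reference 3.0 and share the same core.
        namespace CompanyDLL.Web.Mvc
        {
            public static class SummaryHelper
            {
                // In the real assembly this would be an HtmlHelper extension;
                // here it just delegates to the shared core logic.
                public static string Summarize(string text)
                {
                    return CompanyDLL.Primitives.StringHelpers.Truncate(text, 140) + "...";
                }
            }
        }

    With that layout, a desktop project references only the core assembly, and each web project references the core plus exactly one version-specific web assembly, which sidesteps the MVC 2-in-MVC 3 binding question entirely.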

  • Are we ready for the Cloud computing era?

    - by andrewkatumba
    "Elite" developer circles are abuzz with the notion of cloud computing: the increasing bandwidth, the desire for faster and leaner operations, and of course the need to outsource non-core IT-related business requirements, e.g. word processing, spreadsheets, data backups. In strolls Chrome OS (and I am sure other similar OSes will join with their own wagons for us to jump on), offering just that: internet-based services (more like a repository of them) - quick, efficient, "reliable", for the most part cheap, and often even free! And we all go rhapsodic!

    It boils down to the age-old dilemma: "if the cops are so busy protecting us, then who will protect them" (even the folks back in Hollywood keep us reminded)! Who is going to ensure that these internet-based services do not go down (either by intent or through some malicious third party), leading to a multinational, colossal disaster? At the risk of sounding pessimistic, IT IS NOT AN ISSUE OF TRUST; this is but a mere case of Murphy's Law! What then? Should the "cloud" be trusted to this extent at this time?

    This is an era where challenges are rapidly solved with lightning promptness to "beat the competition". My hope is that our solutions are not just creating problems that we may not be able to solve! Keeping my ear on the ground.

  • Using Query Classes With NHibernate

    - by Liam McLennan
    Even when using an ORM such as NHibernate, the developer still has to decide how to perform queries. The simplest strategy is to get access to an ISession and directly perform a query whenever you need data. The problem is that doing so spreads query logic throughout the entire application - a clear violation of the Single Responsibility Principle. A more advanced strategy is to use Eric Evans's Repository pattern, thus isolating all query logic within the repository classes. I prefer to use query classes. Every query needed by the application is represented by a query class, aka a specification. To perform a query I:
    1. Instantiate a new instance of the required query class, providing any data that it needs
    2. Pass the instantiated query class to an extension method on NHibernate's ISession type

    Querying my database for all people over the age of sixteen looks like this:

        [Test]
        public void QueryBySpecification()
        {
            var canDriveSpecification = new PeopleOverAgeSpecification(16);
            var allPeopleOfDrivingAge = session.QueryBySpecification(canDriveSpecification);
        }

    To be able to query for people over a certain age I had to create a suitable query class:

        public class PeopleOverAgeSpecification : Specification<Person>
        {
            private readonly int age;

            public PeopleOverAgeSpecification(int age)
            {
                this.age = age;
            }

            public override IQueryable<Person> Reduce(IQueryable<Person> collection)
            {
                return collection.Where(person => person.Age > age);
            }

            public override IQueryable<Person> Sort(IQueryable<Person> collection)
            {
                return collection.OrderBy(person => person.Name);
            }
        }

    Finally, the extension method that adds QueryBySpecification to ISession:

        public static class SessionExtensions
        {
            public static IEnumerable<T> QueryBySpecification<T>(this ISession session, Specification<T> specification)
            {
                return specification.Fetch(
                    specification.Sort(
                        specification.Reduce(session.Query<T>())
                    )
                );
            }
        }

    The inspiration for this style of data access came from Ayende's post "Do You Need a Framework?". I am sick of working through multiple layers of abstraction that don't do anything. Have you ever seen code that required a service layer to call a method on a repository, which delegated to a common repository base class, which wrapped an ORM's unit of work? I can achieve the same thing with NHibernate's ISession and a single extension method. If you're interested, you can get the full query classes example source from GitHub.

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by holding a specification workshop. For this workshop we had two developers, one tester and one business analyst. The workshop lasted an hour and a half, and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop, though, some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling that we could have missed other important stuff) but, more importantly, felt that the workshop was also a waste of time, as she could have figured out all these scenarios by herself in a shorter period of time.

    So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (developer/tester/business analyst) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end-goal/definition-of-done vision. But in practice, we still think it is more cost-effective to first have the BA work on her own on the examples, and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterward to double-check. We don't think simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature; the business analyst is actually the best person for that kind of work. What we do instead is review what she wrote and see whether we have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she may have missed). This workshop lasted an hour and a half, and by the end of it we didn't feel confident enough about what we had done... sure, we could have spent more time on it, but honestly most people get exhausted after an hour and a half of brainstorming. So how can you get this kind of workshop to work effectively in practice?

  • Understanding exceptional cases

    - by Justin
    I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when out-of-the-ordinary input or state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data using an invalid DQL statement, and that a developer should be warned immediately that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except in unit testing. Since this is true, it's better to avoid plaguing a bunch of code with null/error checks and use exceptions.

    I've read in books that exceptions shouldn't be thrown when validating user input, but I've seen examples where that guideline is broken. One example is the Zend Framework: if you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they have used an exception in this scenario. I guess you can argue that the Zend Framework does not know how to continue processing the request.

    One last example: I wrote a function to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with a web site (such as clicking a button to display the form to input data). I don't expect users to be mucking around with the request type, so under normal operating conditions the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and re-using a function to map resources to their validation services would be very useful. I can then catch the exception and, based on the consumer, return an error message suitable for the request type...
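
    The distinction being felt for here is often phrased as programmer error versus expected input failure. A hypothetical sketch (in C# rather than PHP, with invented names): routine user-input problems return a result object, while an unknown resource type - which should be impossible under normal operation - throws.

        using System;
        using System.Collections.Generic;

        public sealed class ValidationResult
        {
            public bool IsValid { get; private set; }
            public string Error { get; private set; }

            public static ValidationResult Ok()
            {
                return new ValidationResult { IsValid = true };
            }

            public static ValidationResult Fail(string error)
            {
                return new ValidationResult { IsValid = false, Error = error };
            }
        }

        public class FormRegistry
        {
            private readonly Dictionary<string, Func<string, ValidationResult>> _validators =
                new Dictionary<string, Func<string, ValidationResult>>();

            public void Register(string resourceType, Func<string, ValidationResult> validator)
            {
                _validators[resourceType] = validator;
            }

            public ValidationResult Validate(string resourceType, string input)
            {
                // An unknown resource type means a coding error or tampering,
                // not routine bad input - so throwing is arguably right here.
                Func<string, ValidationResult> validator;
                if (!_validators.TryGetValue(resourceType, out validator))
                    throw new ArgumentException("No form registered for resource type: " + resourceType);

                // An ordinary validation failure is an expected outcome, not exceptional.
                return validator(input);
            }
        }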

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding; I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of payoff is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files - which is not much more useful, and maybe less useful, than the actual project code files.

    In reading the available literature, I notice that TDD seems to be a massive time sink up front but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here; I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples.

    I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited, or gone back to, code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

  • Help identify the pattern for reacting on updates

    - by Mike
    There's an entity that gets updated from external sources. Update events come at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, it is the most current state of the entity that needs to be processed. There's a point of no return during processing, where the current state of the entity (a consistent state - no partial update) is saved somewhere else, and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one processing run before the point of no return, i.e. the entity state should not be processed more than once.

    So what I'm looking for is a pattern to cancel current processing before the point of no return, or to abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity lives mainly in a database, with some files on disk, and the system is in .NET with web services and message queues.

    What comes to mind is a database queue-like table. An arriving update inserts a row into that table and the processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded; otherwise the processing data is persisted and it goes beyond the point of no return. A sketch of this check appears below.

    Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I would need to access the queue API at the point of no return to check for the existence of new messages, and that approach also lacks elegance. Is there a name for this pattern, and an existing solution?
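
    For what it's worth, the check described above is essentially optimistic concurrency with a version stamp. A minimal, hypothetical C# sketch (in a real system the version would live in the queue table and the final check-and-persist would run in one transaction):

        using System;
        using System.Threading;

        public class EntityProcessor
        {
            private long _version; // bumped on every arriving update

            public void OnUpdateArrived()
            {
                Interlocked.Increment(ref _version);
                // ...then trigger Process() if a run is not already in flight
            }

            public void Process()
            {
                long observed = Interlocked.Read(ref _version);
                var result = GatherData(); // everything before the point of no return

                // Point of no return: commit only if no newer update arrived meanwhile.
                if (Interlocked.Read(ref _version) == observed)
                    Persist(result);
                // else: discard; the newer update has already triggered a fresh run.
            }

            private object GatherData() { return new object(); } // stand-in
            private void Persist(object result) { }              // stand-in
        }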

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition (EBR) is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while the application is in use, thereby minimizing or eliminating downtime. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot-upgrade capability; you can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition-Based Redefinition.

    Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter of the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). The SQL statement is executed for each newly created physical database connection.

    Note that we are assuming a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier edition, and a later version of the application with a data source that references the later edition. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature; see Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch: you need to be running with a versioned database and a versioned application initially, so that you can then switch versions. The recommended way to version a WLS application is simply to add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor stored in the EAR or WAR, so that everything is self-contained.

    There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR); you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application; use a global scope instead. See Configuring JDBC Application Modules for Deployment for more details. There is also one known problem: this does not work correctly with an XA data source (a patch is available with bug 14075837).
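    Since WLST scripts are Jython, the configuration described above can be sketched in a few lines. This is a hedged example: the URL, credentials, data source name, and edition name are all placeholders, and the bean path assumes a standard JDBC system resource.

        # set_init_sql.py -- WLST (Jython) sketch; all names are placeholders.
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        edit()
        startEdit()
        # Navigate to the pool parameters of the data source.
        cd('/JDBCSystemResources/myEbrDataSource/JDBCResource/myEbrDataSource'
           '/JDBCConnectionPoolParams/myEbrDataSource')
        # The leading 'SQL' keyword tells WLS to run the rest as a statement
        # on every newly created physical connection.
        cmo.setInitSQL('SQL ALTER SESSION SET EDITION = patch_edition')
        save()
        activate()
        disconnect()

    Each data source would point at exactly one edition, per the assumption above, so the versioned application pair amounts to two data sources differing only in this Init SQL value.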

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture is extremely lightweight, portable, open, and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, first at an introductory conceptual level and then in a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer, and Oracle WebLogic Server (WLS) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple, and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, with either XML or JSON, along with CRUD access via the Open-XDX Java component. Open-XDX takes a code-free approach in which data mapping is configured as templates, using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input), and the appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library; again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations, and reviews, allowing developers to learn the techniques and approaches in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu-driven, while Java coding is limited to configuring values and parameters and to performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources covering other delivery aspects, such as user management and application deployment, can be referenced for Oracle BPM and WLS.

    Read the article

< Previous Page | 548 549 550 551 552 553 554 555 556 557 558 559  | Next Page >