Search Results

Search found 4918 results on 197 pages for 'architecture'.

  • Microkernel architectural pattern and applicability for business applications

    - by Pangea
    We are in the business of building customizable web applications. We have a core team that provides what we call the core platform (services like security, billing, etc.), on top of which core products are built. These core products are industry-specific solutions for telecom, utilities, etc. The core products are later used by other teams to build customer-specific solutions in a particular industry. Until now we have had only a loose separation between platform and core product; the customer-specific solutions are built by customizing 20-40% of the core offering and re-packaging it, and the core platform and core products are released together as monolithic apps (EAR files). I am looking to improve on the current situation so that there is a cleaner separation between these three layers, which would let each of them evolve independently. I've read about the Microkernel architecture and felt that I could apply its principles in my context, but most of what I've read about this pattern is in the context of operating systems or application servers. I am wondering if there are any examples of how that pattern has been used to architect business applications, or whether you could provide some insight on how to apply it to my problem.
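
    One way to picture the microkernel split in this setting: the platform is a small kernel that hosts only generic services, and each core product (and each customer customization) is a plug-in registered against it. The C++ sketch below is purely illustrative; every name in it is hypothetical, and in a Java EE stack the same shape would come from separate modules wired together by configuration rather than in main().

        #include <iostream>
        #include <map>
        #include <memory>
        #include <string>
        #include <typeinfo>
        #include <vector>

        // --- Core platform (the "microkernel"): only generic services live here ---
        struct IPlatformService { virtual ~IPlatformService() = default; };

        struct BillingService : IPlatformService {
            void bill(const std::string& account) { std::cout << "billing " << account << "\n"; }
        };

        class Platform {
        public:
            template <typename T> void provide(std::shared_ptr<T> svc) {
                services_[typeid(T).name()] = svc;
            }
            template <typename T> std::shared_ptr<T> get() {
                return std::static_pointer_cast<T>(services_.at(typeid(T).name()));
            }
        private:
            std::map<std::string, std::shared_ptr<IPlatformService>> services_;
        };

        // --- Core products are plug-ins: they see only the Platform interface ---
        struct IProduct {
            virtual ~IProduct() = default;
            virtual void run(Platform& platform) = 0;
        };

        struct TelecomProduct : IProduct {
            void run(Platform& platform) override {
                platform.get<BillingService>()->bill("telecom-customer");
            }
        };

        int main() {
            Platform platform;
            platform.provide(std::make_shared<BillingService>());

            // Each product (and customer-specific variant) would live in its own
            // module and be registered here by configuration, not hard-coded.
            std::vector<std::unique_ptr<IProduct>> products;
            products.push_back(std::make_unique<TelecomProduct>());
            for (auto& p : products) p->run(platform);
        }

    The point of the shape is the dependency direction: products depend on the platform and never the reverse, and customer solutions would depend on a product the same way, which is what lets the three release separately.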

  • UML Diagrams of Multi-Threaded Applications

    - by PersonalNexus
    For single-threaded applications I like to use class diagrams to get an overview of the architecture of the application. This type of diagram, however, hasn’t been very helpful for understanding heavily multi-threaded/concurrent applications, for instance because different instances of a class "live" on different threads (meaning accessing an instance is safe only from the one thread it lives on). Consequently, associations between classes don’t necessarily mean that I can call methods on those objects directly; instead I have to make that call on the target object's thread. Most of the literature I have dug up on the topic, such as Designing Concurrent, Distributed, and Real-Time Applications with UML by Hassan Gomaa, had some nice ideas, such as drawing thread boundaries into object diagrams, but overall seemed a bit too academic and wordy to be really useful. I don’t want to use these diagrams as a high-level view of the problem domain, but rather as a detailed description of my classes/objects, their interactions, and the limitations due to the thread boundaries I mentioned above. I would therefore like to know:

    What types of diagrams have you found most helpful in understanding multi-threaded applications?

    Are there any extensions to classic UML that take into account the peculiarities of multi-threaded applications, e.g. annotations illustrating that:

    some objects might live in a certain thread while others have no thread affinity;
    some fields of an object may be read from any thread, but written to only from one;
    some methods are synchronous and return a result, while others are asynchronous, queueing requests and returning results, for instance, via a callback on a different thread.

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description

    I'm creating (designing) an entity system and I've run into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor, which just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    Actual problem

    The problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation all depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

    Tank: Transform, Sprite, PhysicalBody
    BgTree: Transform, Sprite
    House: Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like this:

    TTTTTTTTTTT // Transform pool
    SSSSSSSSSSS // Sprite pool
    PPPP // PhysicalBody pool
    HH // Health pool

    There is no way to process them using shared indices. I've spent 3 days working on this and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity, but that wasn't a good idea either. Can you give me some advice on how to process these pools? Or should I change the overall design? Maybe I should create pools of entities (pools of component pools), but I guess that would be a nightmare for the CPU caches. Thanks.
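
    One common data-oriented answer to exactly this mismatch is a sparse-set pool: components stay densely packed per pool, each pool remembers which entity owns each slot, and a subsystem walks its smallest pool and joins the others by entity ID. A minimal C++ sketch, with all names hypothetical:

        #include <cstdint>
        #include <vector>

        // Sparse-set pool: components stay densely packed (cache friendly),
        // and entity IDs map to dense indices through a sparse array.
        template <typename Component>
        class Pool {
        public:
            void add(std::uint32_t entity, const Component& c) {
                if (entity >= sparse_.size()) sparse_.resize(entity + 1, kNone);
                sparse_[entity] = static_cast<std::uint32_t>(dense_.size());
                dense_.push_back(c);
                owners_.push_back(entity);
            }
            bool has(std::uint32_t entity) const {
                return entity < sparse_.size() && sparse_[entity] != kNone;
            }
            Component& get(std::uint32_t entity) { return dense_[sparse_[entity]]; }

            // Iterate dense_ directly; owners_[i] says which entity slot i belongs to.
            std::vector<Component> dense_;
            std::vector<std::uint32_t> owners_;
        private:
            static constexpr std::uint32_t kNone = 0xFFFFFFFFu;
            std::vector<std::uint32_t> sparse_;
        };

        struct Transform    { float x = 0, y = 0; };
        struct PhysicalBody { float vx = 0, vy = 0; };

        // A physics subsystem walks the *smaller* pool (PhysicalBody) and joins
        // against Transform by entity ID -- no index arithmetic between pools.
        void physicsStep(Pool<Transform>& transforms, Pool<PhysicalBody>& bodies, float dt) {
            for (std::size_t i = 0; i < bodies.dense_.size(); ++i) {
                std::uint32_t e = bodies.owners_[i];
                if (!transforms.has(e)) continue;
                Transform& t = transforms.get(e);
                t.x += bodies.dense_[i].vx * dt;
                t.y += bodies.dense_[i].vy * dt;
            }
        }

    The walk stays linear over the dense PhysicalBody array and each Transform lookup is one indirection, so the pools no longer need to line up index-for-index.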

  • How should I host our scalable worker processes?

    - by Pieter Breed
    We are designing a new architecture for an enterprise business. The principle we've followed so far is not to develop what you can (possibly buy and) deploy; i.e., don't reinvent any wheels. In this way we've decided on CQRS, RabbitMQ, Riak and a bunch of other things. We still need to write /some/ business code, though, and this will take the form of worker processes which consume commands from a message queue and, after any side effects, produce events onto another message queue. The idea behind this is that via the competing-consumers design we get a scalable design right out of the box. One option is writing a management infrastructure that knows how to:

    deploy code
    instantiate processes
    kill processes
    update configuration
    etc.

    In other words, one that provides fault tolerance and scalability. This is exactly what something like GAE or Heroku does for you, but in a public setting, and in our organization public is bad. My question is: is there an out-of-the-box solution that we can use to host our consumers in? Something like a private cloud or a private platform-as-a-service; a private Heroku or GAE. Is there some kind of software product with which we can do all of these things and thereby get scalability and fault tolerance for our consumers?

  • CQRS applicability when some commands need to block the UI

    - by regularfry
    I am working on an app which I would dearly love to transition from a fairly traditional layered architecture to CQRS, for a number of reasons, not least of which is that having a robust event log will make a couple of feature requests I can see barrelling towards me trivial to accommodate. Now, I have a conceptual problem: of the roughly 40 commands the user can initiate, there are three which the user needs to be sure have successfully completed before the UI lets them do anything else. Everything else fits into the "submit a request, query for success later" model except these three commands. How is this handled in CQRS-land?

    Do I separate the three blocking commands into effectively a third service, so I have Commands, Queries, and BlockingCommands?
    Do I have a two-stage event processor with an in-request blocking first stage which only gets used for the blocking commands?
    Does the existence of these three commands mean the whole idea of applying CQRS is invalid?
    Should I just pretend they aren't blocking and poll for success in the UI?

    I'm sure this must come up on other projects; how is it usually handled?
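
    One pattern that comes up for this: keep a single asynchronous command pipeline, but have the gateway hand back a completion handle correlated with the command, and let the UI choose per command whether to await it. A minimal C++ sketch of the shape; in a real system the promise would be resolved by the success/failure event coming off the event stream, not by a local thread:

        #include <chrono>
        #include <future>
        #include <iostream>
        #include <string>
        #include <thread>

        struct Command { std::string name; };

        // One bus for all 40 commands; handling is always asynchronous, but
        // every dispatch returns a completion future. Most callers drop it;
        // the three blocking commands wait on it.
        class CommandBus {
        public:
            std::future<bool> dispatch(const Command& cmd) {
                std::promise<bool> done;
                std::future<bool> fut = done.get_future();
                // Stand-in for the write side: a real bus would set the value
                // when the corresponding success/failure event is observed.
                std::thread([p = std::move(done)]() mutable {
                    p.set_value(true);
                }).detach();
                return fut;
            }
        };

        int main() {
            CommandBus bus;

            auto later = bus.dispatch({"RenameCustomer"}); // fire, query for success later
            (void)later;

            auto done = bus.dispatch({"SubmitOrder"});     // one of the three blocking commands
            if (done.wait_for(std::chrono::seconds(5)) == std::future_status::ready && done.get())
                std::cout << "confirmed; UI may continue\n";
        }

    This keeps the three special commands on the same rails as the other 37: no separate BlockingCommands service, just a UI that blocks on the acknowledgement for those three.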

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage design dependencies, and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have poor architecture and design. Another issue is that there are some developers who are against writing any kind of test.

    A few things I'm doing to change this (though I'm either failing or the change is too small):

    Running presentations about design principles (SOLID, clean code, etc.).
    Workshops about TDD and BDD.
    Coaching teams (this includes using Sonar, FindBugs, JDepend and other tools).
    IDE & refactoring talks.

    A few things I'm thinking of doing in the future (though I'm concerned they might not be good):

    Forming a team of OO evangelists who disseminate an OO way of thinking in different teams (these people would need to change teams every few months).
    Running design review sessions to critique the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful).

    Something I've found with the teams I coach is that as soon as I leave them, they revert to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have now prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and all the sub-servers (chat, login, game) connect to the master server, as do all the clients. When a client connects, a request flows like this:

    Client - MS (Master) - decides which SS (sub-server) to forward to - forwards the request to the SS
    SS - analyzes the message - sends the response to the MS - MS decides which client to forward to - forwards the response to the client

    It looks like it's going through a lot of stages; it takes double the time to process a message compared to the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or one that professional games use? I still want a master/sub-server approach. I just want to clarify that I'm going in the right direction before writing all my code. Thanks for any answer :)
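
    A common alternative that keeps the master/sub-server split but avoids the double hop: use the master only for matchmaking. It answers one request with the address of a sub-server (plus a ticket the sub-server can verify), and the client then talks to that sub-server directly. A hypothetical C++ sketch of just the handshake types, with the transport omitted:

        #include <cstdint>
        #include <string>

        // Master as matchmaker, not proxy: one round trip to discover the
        // right sub-server, then the client connects to it directly.
        struct LoginRequest { std::string user; };
        struct Redirect    { std::string host; std::uint16_t port; std::string ticket; };

        // On the master: pick the least-loaded sub-server and sign a ticket
        // the sub-server can verify, so clients can't skip authentication.
        Redirect route(const LoginRequest& req) {
            // load tracking and ticket signing elided
            return Redirect{"game03.example.net", 7777, "signed-ticket-for-" + req.user};
        }

        // Client flow (in comments, since the sockets are elided):
        //   1. connect(master); send LoginRequest
        //   2. receive Redirect
        //   3. disconnect(master); connect(redirect.host, redirect.port)
        //   4. present redirect.ticket; play -- no per-message proxy hop

    Steady-state game traffic then costs one hop instead of two, and the master's load stays proportional to logins rather than to every message.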

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So it seems like it would make sense to create a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code):

    Function GetQuote(String ID) Returns Quote

    So far so good. Now, when this service is consumed, to keep things "de-coupled" we create essentially a duplicate of the Quote object and map from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases these classes will have exactly the same properties. So if the Quote service is consumed by 5 other applications, we will have 6 definitions of what a "Quote" is: one for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?
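
    One middle ground often suggested is a shared, versioned contract package: the DTO shapes live in exactly one place, the service and any consumer happy to track the contract's release cycle reference it, and only consumers that genuinely need isolation pay for a mapping layer. In the .NET stack in question this would be a shared assembly of DTOs; the sketch below is a hypothetical C++ rendering of the same idea, with invented field names:

        // quote_contract.h -- a contract package versioned and released on its own.
        // The service and its consumers all depend on this single definition,
        // so the shape of a Quote is declared exactly once.
        #pragma once
        #include <string>
        #include <vector>

        namespace contracts { namespace v1 {

        struct Contact {
            std::string name;
            std::string phone;
            std::string email;
        };

        struct Quote {
            std::string id;
            Contact shipper;
            Contact consignee;
            std::vector<std::string> notes;
        };

        } } // namespace contracts::v1

    The trade is explicit: DRY inside the contract, at the cost of coupling every consumer to the contract's versioning. Many teams still duplicate deliberately at boundaries they expect to diverge; the duplication is then a choice rather than a tax.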

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    This question already has an answer here: "What is enterprise software, exactly?"

    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple:

    MVC
    Service Layer
    EF
    DB

    to really complex:

    MVC
    UoW
    DI / IoC
    Repository
    Service
    UI Tests
    Unit Tests
    Integration Tests

    But on both ends of the spectrum, the quality requirements are about the same. In the simple projects, new devs/consultants can hop on, make changes, and contribute immediately, without having to wade through six layers of abstraction to understand what's going on, or risking a misunderstanding of some complex abstraction that costs us down the line. In all cases, there was never a need to actually make code swappable or reusable, and the tests were never actually maintained past the first iteration because requirements changed, it was too time-consuming, deadlines, business pressure, etc.

    So if, in the end:

    testing and interfaces aren't used,
    rapid development (read: cost savings) is a priority, and
    the project's requirements will be changing a lot while in development,

    would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, the number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer won't apply to all cases, but I'm interested in hearing from devs/consultants who have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

  • Avoid overwriting all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager in the image below). It has a list of all the things to draw: images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit; the image below represents how I would organize each class. Drawable has a virtual method draw that will be called by the manager, and Image and Text override this method. My problem is that I would like the Image::draw method to work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that.

    My first idea would be for Image to have a pointer to sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like in the image below). In the Polygon constructor I would write something like:

    ptr_shape = &polygon_shape;

    This doesn't look very elegant, because I have two variables that are, in fact, just one.

    My second idea is to store the sf::CircleShape or sf::ConvexShape directly in ptr_shape, like:

    ptr_shape = new sf::ConvexShape(...);

    and, to use a function that exists only on ConvexShape, cast it like so:

    ((sf::ConvexShape*)ptr_shape)->convex_method();

    But that doesn't look very elegant either; I am not even sure I am allowed to do that.

    My question: I added details about the whole thing because maybe my whole architecture is wrong. I would like to know how I could design my program to be safe without overriding all the Image methods in every child class. I apologize if this question has already been asked; I have no idea what to google.
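
    A minimal sketch of the first idea done with ownership instead of duplicate members, assuming the SFML 2.x API: the subclass builds and configures the concrete shape, then hands it up to Image as a std::unique_ptr<sf::Shape>. Image only ever needs the base interface, so draw works unchanged for every shape and no cast is required (the class names besides the sf:: ones are hypothetical):

        #include <SFML/Graphics.hpp>
        #include <memory>
        #include <vector>

        class Drawable {
        public:
            virtual ~Drawable() = default;
            virtual void draw(sf::RenderWindow& window) = 0;
        };

        // Image owns exactly one shape, through the base-class interface.
        class Image : public Drawable {
        public:
            explicit Image(std::unique_ptr<sf::Shape> shape) : shape_(std::move(shape)) {}
            void draw(sf::RenderWindow& window) override { window.draw(*shape_); }
        private:
            std::unique_ptr<sf::Shape> shape_;
        };

        // Subclasses configure the concrete shape before passing it up, so
        // Image never needs to know (or cast to) the derived type.
        class Circle : public Image {
        public:
            explicit Circle(float radius)
                : Image(std::make_unique<sf::CircleShape>(radius)) {}
        };

        class Polygon : public Image {
        public:
            explicit Polygon(const std::vector<sf::Vector2f>& points)
                : Image(makeConvex(points)) {}
        private:
            static std::unique_ptr<sf::Shape> makeConvex(const std::vector<sf::Vector2f>& pts) {
                auto shape = std::make_unique<sf::ConvexShape>(pts.size());
                for (std::size_t i = 0; i < pts.size(); ++i) shape->setPoint(i, pts[i]);
                return shape;
            }
        };

    If a subclass later needs a ConvexShape-only method, it can keep a typed sf::ConvexShape* obtained once in its constructor before handing ownership up; that confines knowledge of the concrete type to the one class that actually needs it.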

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST method and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill:

    Database Access
    Data Entities
    Worker
    Result Processing
    Process Flow Manager
    Email/Notification
    Error Handling
    Logging

    Of these eight, five were actually cross-cutting concerns:

    Database Access
    Data Entities
    Email/Notification
    Error Handling
    Logging

    These five cross-cutting concerns were determined after I created an aspect-oriented model to help identify the system components that could be factored out into separate components. These separated components would then be included in the system so that they could be used by the various other components. These five components allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events; thus, they share unique aspects of the system via their implementation. The use of aspect-oriented architecture greatly helped me define what components I needed to create and what each of those components could do. It also showed how the aspects depended on each other, so that each component did not have to re-implement code that already existed in the system.

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture, so I’m interested in the consensus. The answer seems fairly obvious to me, but something about it is nagging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one?

    The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there is a need to update the application data in this database periodically, should it be split into two databases, so that one can simply download an updated application-data database file and replace the old one? Or should they remain one database, with the application data updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason it just doesn’t feel right, and I can't pick out quite why.
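
    For what it's worth, some database engines let you have both at once: two physical files, one logical database. A hypothetical sketch with SQLite's ATTACH (table names invented, error handling elided), where app.db is the replaceable application-data file and user.db holds the user's rows that reference it:

        #include <sqlite3.h>

        // user.db holds user data; app.db is shipped application data that an
        // update can simply replace on disk. ATTACH exposes the pair as one
        // logical database, so user rows can still join against app rows.
        int main() {
            sqlite3* db = nullptr;
            if (sqlite3_open("user.db", &db) != SQLITE_OK) return 1;

            sqlite3_exec(db, "ATTACH DATABASE 'app.db' AS app;",
                         nullptr, nullptr, nullptr);

            // Cross-database join: user data referencing application data.
            sqlite3_exec(db,
                "SELECT u.note, a.name "
                "FROM bookmarks AS u JOIN app.items AS a ON a.id = u.item_id;",
                nullptr, nullptr, nullptr);

            sqlite3_close(db);
            return 0;
        }

    One real cost to weigh: SQLite does not enforce foreign-key constraints across attached databases, so referential integrity between the user rows and the replaceable application data becomes the application's job; that may be the nagging feeling behind the single-database instinct.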

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine

    I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.

    Data-Centered Design Decision

    My design decision is to use a data-centered architecture, where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, etc. Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to Use a Relational Database

    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content: items, characters, inventories, etc., except meshes and textures, which are even more performance-related, so I will keep those in memory. Is a SQL database fast enough to use for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    Advantages Would Be

    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects to represent entities. I could easily query data about objects near the player needed for rendering, and I wouldn't have to care about data of objects far away. Moreover there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there would already be a single place where the whole game state is stored.
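
    For a sense of what the hot path would look like, here is a hypothetical sketch using SQLite's C API: an in-memory database and one prepared statement reused every tick, which at least avoids re-parsing SQL. Even so, a per-frame UPDATE still goes through binding, VM execution and B-tree bookkeeping, which is typically far slower than writing to a plain in-memory struct; that is why most engines keep hot state in memory and use a database, if at all, for persistence and tooling:

        #include <sqlite3.h>

        int main() {
            sqlite3* db = nullptr;
            sqlite3_open(":memory:", &db);
            sqlite3_exec(db,
                "CREATE TABLE characters(id INTEGER PRIMARY KEY, x REAL, y REAL);",
                nullptr, nullptr, nullptr);
            sqlite3_exec(db, "INSERT INTO characters VALUES (1, 0.0, 0.0);",
                         nullptr, nullptr, nullptr);

            sqlite3_stmt* update = nullptr;
            sqlite3_prepare_v2(db, "UPDATE characters SET x = ?, y = ? WHERE id = ?;",
                               -1, &update, nullptr);

            // Per tick: bind the new position, step, reset -- no SQL parsing
            // inside the game loop.
            sqlite3_bind_double(update, 1, 10.5);
            sqlite3_bind_double(update, 2, -3.25);
            sqlite3_bind_int(update, 3, 1);
            sqlite3_step(update);
            sqlite3_reset(update);

            sqlite3_finalize(update);
            sqlite3_close(db);
        }

    A middle path worth considering: keep entities in memory during play and write them to SQLite only at save points, which preserves the savegame and online-expansion advantages without putting SQL in the frame loop.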

  • MVVM and service pattern

    - by alfa-alfa
    I'm building a WPF application using the MVVM pattern. Right now, my viewmodels call the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass each required service to the viewmodel. It's easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewmodels for complex models, I have a constructor with a LOT of services injected into it (one to retrieve each dependency, plus one for a list of all available values to bind to an ItemsSource, for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit test easily. I'm thinking of a few solutions:

    Creating a services singleton (IServices) containing all the available services as interfaces, e.g. Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of service parameters.

    Creating a facade for the services used by a given viewmodel and passing that object to the viewmodel's constructor. But then I'd have to create a facade for each of my complex viewmodels, and it might be a bit much...

    What do you think is the "right" way to implement this kind of architecture?
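
    Of the two, the facade keeps the dependency explicit, and therefore fakeable in a unit test, which the Services.Current singleton does not. The app in question is C#/WPF, but the shape is the same in any OO language; a hypothetical C++ sketch:

        #include <string>
        #include <vector>

        // Individual services stay small and mockable.
        struct ICustomerService {
            virtual ~ICustomerService() = default;
            virtual std::string load(int id) = 0;
        };
        struct ICountryService {
            virtual ~ICountryService() = default;
            virtual std::vector<std::string> all() = 0;
        };

        // The facade groups exactly the dependencies one complex viewmodel
        // needs; its constructor shrinks to a single seam tests can fake.
        struct ICustomerEditorServices {
            virtual ~ICustomerEditorServices() = default;
            virtual ICustomerService& customers() = 0;
            virtual ICountryService& countries() = 0;
        };

        class CustomerEditorViewModel {
        public:
            explicit CustomerEditorViewModel(ICustomerEditorServices& services)
                : services_(services) {}
            void load(int id) {
                name_ = services_.customers().load(id);
                countryChoices_ = services_.countries().all(); // ItemsSource data
            }
        private:
            ICustomerEditorServices& services_;
            std::string name_;
            std::vector<std::string> countryChoices_;
        };

    In a test, ICustomerEditorServices is one small interface to stub instead of N constructor parameters, and the constructor still documents exactly which services each viewmodel really uses, which the singleton hides.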

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this.

    First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little, manager.

    In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite).

    The reason for that is simple. After all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class, or to split this giant manager into sub-managers.

    Of course this is a classic anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
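
    One structural guard-rail that helps under crunch is making the top-level object incapable of accumulating features: it only owns a list of subsystems behind one tiny interface, so "just add it to the manager" has no manager to land in. A minimal C++ sketch, with all names hypothetical:

        #include <memory>
        #include <vector>

        struct ISubsystem {
            virtual ~ISubsystem() = default;
            virtual void update(float dt) = 0;
        };

        class PhysicsSystem : public ISubsystem {
        public:
            void update(float dt) override { (void)dt; /* integrate bodies */ }
        };
        class RenderSystem : public ISubsystem {
        public:
            void update(float dt) override { (void)dt; /* draw the frame */ }
        };

        // Deliberately boring: Game can only loop, never "manage".
        class Game {
        public:
            void add(std::unique_ptr<ISubsystem> s) { systems_.push_back(std::move(s)); }
            void tick(float dt) { for (auto& s : systems_) s->update(dt); }
        private:
            std::vector<std::unique_ptr<ISubsystem>> systems_;
        };

        int main() {
            Game game;
            game.add(std::make_unique<PhysicsSystem>());
            game.add(std::make_unique<RenderSystem>());
            game.tick(1.0f / 60.0f);
        }

    The discipline this buys is mechanical: a new feature means a new ISubsystem (or a change inside one), and cross-feature communication goes through events or shared components rather than through ever-growing methods on Game.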

  • decouple software components via nameconvention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multi-tier architecture I have:

    Base class
    DAL.Device // my entity

    Interfaces
    BL.IDriver // handles the data processing between application and device
    BL.IDriverCreator // creates an IDriver from a Device
    BL.IDriverFactory // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs (a) a code change within the DriverFactory and (b) a reference to the new IDriver implementation/assembly. From a customer's point of view, that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process.

    My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):

    BL.RestDriver
    BL.RestDriverCreator
    DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I could load all driver DLLs via reflection and do name splitting/comparison (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings).

    My question: is that a good approach across layer borders? If not, what would be a good approach?
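
    An alternative that gets the same decoupling without string conventions is self-registration: each driver registers a creator with the factory under its device name at load time, so the factory never names a concrete type and adding a driver touches no factory code. The question is .NET, where this is usually done per assembly at startup; here is a hypothetical C++ sketch of the shape:

        #include <functional>
        #include <map>
        #include <memory>
        #include <stdexcept>
        #include <string>

        struct Device { std::string typeName; };          // cf. DAL.Device
        struct IDriver { virtual ~IDriver() = default; }; // cf. BL.IDriver

        // The factory holds only a name -> creator map; it never mentions a
        // concrete driver type, so new drivers need no factory change.
        class DriverFactory {
        public:
            using Creator = std::function<std::unique_ptr<IDriver>(const Device&)>;
            static DriverFactory& instance() { static DriverFactory f; return f; }
            void registerCreator(const std::string& name, Creator c) {
                creators_[name] = std::move(c);
            }
            std::unique_ptr<IDriver> create(const Device& d) {
                auto it = creators_.find(d.typeName);
                if (it == creators_.end())
                    throw std::runtime_error("no driver for " + d.typeName);
                return it->second(d);
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        // Each driver self-registers when its module is loaded -- the moral
        // equivalent of convention-scanning DLLs, but without string parsing.
        struct RestDriver : IDriver {
            explicit RestDriver(const Device&) {}
        };
        static const bool restRegistered = [] {
            DriverFactory::instance().registerCreator("Rest",
                [](const Device& d) { return std::make_unique<RestDriver>(d); });
            return true;
        }();

    Whether you scan by convention or register explicitly, the customer-facing win is the same: new drivers arrive as new modules, and the already-validated factory assembly never has to be rebuilt.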

  • Implementing Explosions

    - by Xkynar
    I want to add explosions to my 2D game, but I'm having a hard time with the architecture. Several game elements might be responsible for explosions, like, let's say, explosive barrels and bullets (and there might be chain reactions between nearby barrels). The only options I can come up with are:

    1 - Having an array of explosions and treating them as a game element as important as any other.

    Pros: Having a single array which is updated and drawn with all the other game-element arrays keeps things organized and simple to update, and the explosive barrels would at first glance be easy to create, simply by passing the explosion array as a pointer to each explosive barrel constructor.

    Cons: It might be hard for the bullets to add an explosion to the vector, since bullets are shot by a Weapon class which is located in every mob. So, let's say, if I create a new enemy and add it to the enemy array, that enemy will have a weapon and functions to use it, and if I want the weapon (a rocket launcher in this case) to have access to the explosions array to be able to add a new one, I'd have to pass the explosion array as a pointer to the enemy, which would then pass it to the weapon, which would pass it to the bullets (an ugly chain). Another problem I can think of is a little more subtle: if I'm checking collisions between explosions and barrels (to create a chain reaction) and I detect an explosion colliding with a barrel, adding a new explosion while iterating over the explosions will make Java throw an exception. So this is annoying: I can't iterate through the explosions and add a new explosion; I must do it another way...

    2 - The other idea, which isn't really well thought out yet, is to add an explosive component to every element that might explode, so that when it dies, it explodes or something. But I don't have good ideas for implementing this either.

    Honestly, I don't like either solution, so I'd like to know how it's usually done by actual game developers. Sorry if my problem seems trivial.
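
    The standard dodge for the add-while-iterating problem (a ConcurrentModificationException in Java, invalidated iterators in C++) is deferred spawning: anything may request an explosion at any time, requests go to a pending list, and the pending list is merged in after the update loop. Sketched in C++ with the hit test elided; the same shape works with two ArrayLists in Java:

        #include <vector>

        struct Explosion { float x, y, radius; };
        struct Barrel    { float x, y; bool exploded = false; };

        class ExplosionSystem {
        public:
            // Safe to call at any time, even mid-iteration: requests only
            // touch pending_, never the list being walked.
            void spawn(float x, float y, float radius) {
                pending_.push_back({x, y, radius});
            }

            void update(std::vector<Barrel>& barrels) {
                // Iterate a container that is guaranteed not to grow;
                // chain reactions append to pending_ instead.
                for (const Explosion& e : active_) {
                    for (Barrel& b : barrels) {
                        bool hit = !b.exploded; // circle-vs-barrel test elided
                        if (hit) { b.exploded = true; spawn(b.x, b.y, e.radius); }
                    }
                }
                active_.insert(active_.end(), pending_.begin(), pending_.end());
                pending_.clear();
            }

        private:
            std::vector<Explosion> active_;
            std::vector<Explosion> pending_;
        };

    This also softens the ugly pointer chain: instead of threading the array through enemy, weapon and bullet, those objects can hold a reference to the ExplosionSystem, or post a "spawn explosion" message on an event queue, and just call spawn.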

  • How do OSes work on multiple CPUs? [on hold]

    - by user3691093
    Assumption: "OSes (at least in some part) must be written in assembly. Assembly programs are CPU-specific." If so, how can one OS run on different CPUs? For example: how is it that I can load Ubuntu on different systems having different CPUs (like Intel i3/i5/i7, AMD A8/A6, etc.) from the same bootable disk? Does the disk contain separate assembly programs for each CPU? Are these CPUs 'similar' enough to run the same assembly program? Is my assumption wrong? Something else?

    Update: Thanks for responding. I tried to find out in what way the CPUs I mentioned are 'similar'. I came across the concepts of the Instruction Set Architecture (ISA) and the microarchitecture of a CPU. A CPU will understand a program if the program is compatible with its ISA. Even if CPUs are 'wired up' differently (different microarchitectures), as long as the ISA implemented on top is the same, the program will work. ARM and x86 have different ISAs (that's why there are two Windows 8 versions, right?), and if an application program is written in a HLL with compilers for both platforms, we are saved from wasting time writing two programs.

    1. Did I understand anything wrong?
    2. Are there programs that can take a compiled program as input and produce a program executable on another CPU as output? Is that possible? (Virtualisation?)
    3. 32-bit Windows programs do install on 64-bit Windows, don't they? Aren't 64-bit CPUs 'different' from 32-bit CPUs? They do get separate OS versions, right? Is this backward compatibility achieved using programs like those mentioned in (2)?

  • What is the most effective approach to learn an unfamiliar complex program? [closed]

    - by bdroc
    Possible Duplicate: How do you dive into large code bases?

    I have quite a bit of experience with different programming languages and with writing small, functional programs for a variety of purposes. My coding skills aren't what I have a problem with; in fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for the respective languages (Python and Java are my two primary languages)? Should I start by just looking at function names, or at the documentation? How do you veterans approach this problem? I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill on an open source project, so any suggestions for starting points (maybe something mildly complex) would be great too!

  • Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    - by dwahlin
    There's rarely a boring day working in the world of software development. Part of the fun associated with being a developer is that change is guaranteed, and the more you learn about a particular technology the more you realize there's always a different or better way to perform a task. I've had the opportunity to work on several different real-world Silverlight Line of Business (LOB) applications over the past few years and wanted to put together a list of some of the key things I've learned, as well as key problems I've encountered and resolved. There are several different topics I could cover related to "lessons learned" (some of them were more painful than others) but I'll keep it to 5 items for this post and cover additional lessons learned in the future. The topics discussed were put together for a TechEd talk:

    Pick a Pattern and Stick To It
    Data Binding and Nested Controls
    Notify Users of Successes (and Failures)
    Get an Agent – A Service Agent
    Extend Existing Controls

    The first topic covered relates to architecture best practices and how the MVVM pattern can save you time in the long run. When I was first introduced to MVVM I thought it was a lot of work for very little payoff. I've since learned (the hard way, in some cases) that my initial impressions were dead wrong and that my criticisms of the pattern were generally caused by doing things the wrong way. In addition to MVVM pros, the slides and sample app below also jump into data-binding tricks in nested control scenarios and discuss how animations and media can be used to enhance LOB applications in subtle ways. Finally, there's a discussion of creating a re-usable service agent to interact with backend services, as well as how existing controls make good candidates for customization. I tried to keep the samples simple while still covering the topics as much as possible, so if you're new to Silverlight you should definitely be able to follow along with a little study and practice. I'd recommend starting with the SilverlightDemos.View project, moving to the SilverlightDemos.ViewModels project and then going to the SilverlightDemos.ServiceAgents project. All of the backend "Model" code can be found in the SilverlightDemos.Web project. Custom controls used in the app can be found in the SilverlightDemos.Controls project.

    Sample Code and Slides

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat (and therefore need cooling), and require wiring and special storage units. All of these requirements cost money, whether renting space at a data center or building out a local facility. In some cases, this can be a catalyst for evaluating options to remove the infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS)
    The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS)
    If there is already software available that does what you need, it may make sense to purchase not only the software license but the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS)
    If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit of using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis for letting on-premise applications leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage of this form of computing: choice.

    References:
    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

  • Should I be using a JavaScript SPA designed when security is important

    - by ryanzec
    I asked something kind of similar on Stack Overflow about a particular piece of code, but I want to ask this in a broader sense. I have a web application that I have started to write in Backbone as a Single Page Application (SPA), but I am starting to second-guess myself because of security. We are not storing and sending credit card information or anything like that through this web application, but we are storing sensitive information that people upload to us and will have the ability to re-download. The obvious security concern with JavaScript is that you can't trust anything that comes from it, yet in a Backbone SPA, everything is being sent through JavaScript. There are two security features I will have to build in JavaScript: permissions and authentication.

    The authentication piece is just me overriding the Backbone.Router.prototype.navigate method to check the fragment it is trying to load; if the JavaScript application.session.loggedIn is not set to true (and they are not viewing an unauthenticated page), they are redirected to the login page automatically. The user could easily set application.session.loggedIn to true (or modify the Backbone.Router.prototype.navigate method), but then they would also have to dynamically embed a link into the page (or modify a current one) with the proper classes, data-* attributes and href values to load a page that should only be loaded when the user has logged in (and has the permissions), which is not so easy.

    For permissions I have an acl object. All someone would have to do to view pages or parts of pages they should not be able to is call acl.addPermission(resource, permission) with the proper permissions, or modify acl.hasPermission() to always return true, and then navigate away and back to the page. Certain things in ECMAScript 5, like Object.seal() or Object.freeze(), would help with some of this; however, we have to support IE 8, which does not support that functionality.

    Now, the REST API also performs security checks on every request, so technically even if users are able to see parts of the interface that they should not be able to, they still should not be able to actually affect any data.

    The main benefits for me in developing a JavaScript SPA are that the application is a lot more responsive, since it only transfers the minimum amount of JSON data for the requested action and performs the minimum amount of work. There are also other things I think are beneficial, like being pushed to develop an API for the data (which is good if you want to expand your application to different platforms/technologies) and a stronger separation between front-end and back-end. However, if security is a concern, is it really wise to go down the road of a JavaScript SPA for the front-end?

  • Characteristics of a Web service that promote reusability and change

    Characteristics of a Web service that promote reusability and change:

    Standardized data exchange formats (XML, JSON)
    Standardized communication protocols (SOAP, REST)
    Promotes loosely coupled systems

    Standardized Data Exchange Formats (XML, JSON)

    XML: W3.org defines Extensible Markup Language (XML) as a simplistic text format derived from SGML. XML was designed to solve challenges found in large-scale electronic publishing. In addition, XML plays an important role in the exchange of data, primarily focusing on data exchange on the web.

    JSON: JavaScript Object Notation (JSON) is a human-readable, text-based standard designed for data interchange. This format is used for serializing and transmitting data over a network connection in a structured form. The primary use of JSON is to transmit data between a server and a web application. JSON is an alternative to XML.

    Standardized Communication Protocols (SOAP, REST)

    SOAP: W3Schools.com defines SOAP as a simple XML-based protocol that lets applications exchange data over HTTP. SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages.

    REST: In 2007, Stefan Tilkov described Representational State Transfer (REST) as a set of principles that outline how Web standards are supposed to be used. Using REST in an application ensures that it exploits the Web's architecture to its benefit.

    Promotes Loosely Coupled Systems

    "Loose coupling is an approach to interconnecting the components in a system or network so that those components, also called elements, depend on each other to the least extent practicable. Coupling refers to the degree of direct knowledge that one element has of another." (TechTarget.com, 2007)

    "A loosely coupled system can be easily broken down into definable elements. The extent of coupling in a system can be measured by mapping the maximum number of element changes that can occur without adverse effects. Examples of such changes include adding elements, removing elements, renaming elements, reconfiguring elements, modifying internal element characteristics and rearranging the way in which elements are interconnected." (TechTarget.com, 2007)

    References:
    W3C. (2011). Extensible Markup Language (XML). Retrieved from W3.org: http://www.w3.org/XML/
    W3Schools.com. (2011). SOAP Introduction. Retrieved from W3Schools.com: http://www.w3schools.com/soap/soap_intro.asp
    Tilkov, Stefan. (2007). A Brief Introduction to REST. Retrieved from Infoq.com: http://www.infoq.com/articles/rest-introduction
    TechTarget.com. (2011). Loose coupling. Retrieved from TechTarget.com: http://searchnetworking.techtarget.com/definition/loose-coupling

  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web-based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant-specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow, that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way. I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter. You could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables. So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible. You can add on to almost anything with their web-based UI. FogBugz did something similar when they created a robust plugin model that, come to think of it, might have actually been inspired by Force. I know they spent a lot of time and money on it, and if I'm not mistaken the intention was to actually use it internally for future product development. It sounds like the kind of thing I could be tempted to build, but probably shouldn't. :) Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing?

    EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using that to put together their screens. To extend it, you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build, considering that they have a large number of devs and they worked on it for months, plus their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering if Force.com is the right way to handle this. And I am a hard-core ASP.NET guy, so this is a strange position to find myself in.
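
    The FogBugz-style shape scales down further than a full platform: the core defines a few narrow extension interfaces, tenant-specific customizations implement them in their own modules, and a registry decides which modules are active for which tenant, so the core schema and the other tenants' screens stay clean. The products in question are .NET; here is a hypothetical C++ sketch of the shape:

        #include <map>
        #include <memory>
        #include <string>
        #include <vector>

        // Narrow extension point defined by the core product
        // (cf. the ISearchScreenGridColumn interface mentioned above).
        struct GridColumn {
            virtual ~GridColumn() = default;
            virtual std::string header() const = 0;
            virtual std::string cellFor(int recordId) const = 0;
        };

        // The registry decides which extensions are active per tenant.
        class TenantRegistry {
        public:
            void enable(const std::string& tenant, std::shared_ptr<GridColumn> col) {
                columns_[tenant].push_back(std::move(col));
            }
            const std::vector<std::shared_ptr<GridColumn>>&
            columnsFor(const std::string& tenant) {
                return columns_[tenant]; // core columns plus this tenant's extras
            }
        private:
            std::map<std::string, std::vector<std::shared_ptr<GridColumn>>> columns_;
        };

        // A customization only one client wants lives in its own module and
        // never clutters the core schema or the other tenants' screens.
        struct PriorityColumn : GridColumn {
            std::string header() const override { return "Priority"; }
            std::string cellFor(int recordId) const override {
                return std::to_string(recordId % 3); // placeholder lookup
            }
        };

    The per-tenant field problem usually gets the same treatment at the data layer: an extension table (or key/value store) owned by the module, rather than nullable columns on the shared tables.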

  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture. Going through the SOLID principles, I already notice that ideas like "MVC", "DRY", and "KISS" pretty much fall right into place. That said, I'm still having problems deciding which of two implementations is the better choice when it comes to the Single Responsibility Principle.

    Implementation #1:

    class User
        getName
        getPassword
        getEmail
        // etc...

    class UserManager
        create
        read
        update
        delete

    class Session
        start
        stop

    class Login
        main

    class Logout
        main

    class Register
        main

    The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly named Ravioli Code), following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation:

    class UserView extends View
        getLogin      // Returns the html for the login screen
        getShortLogin // Returns the html for an inline login bar
        getLogout     // Returns the html for a logout button
        getRegister   // Returns the html for a register page
        // etc... as needed

    class UserModel extends DataModel implements IDataModel
        // Implements no new methods yet, outside of the interface methods.
        // Haven't figured out anything special to go here at the moment.
        // All CRUD operations are handled by DataModel
        // through methods implemented by the interface.

    class UserControl extends Control implements IControl
        login
        logout
        register
        startSession
        stopSession

    class User extends DataObject
        getName
        getPassword
        getEmail
        // etc...

    This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save to the database. All the user-data rendering (what the user will see) is handled by UserView and its methods, and all the user actions are in one place in UserControl (plus some automated stuff required by the CMS to keep a user logged in, or to ensure that they stay out). I personally can't think of anything wrong with this implementation either. Personally, I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1). So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?
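
    One way to read the SRP that separates the two designs: a "responsibility" is a reason to change (presentation, behavior, data shape), not a verb, so Implementation #1's Login/Logout/Register classes slice finer than any real axis of change. A compressed, hypothetical C++ sketch of Implementation #2's shape, with the CMS base classes stubbed out:

        #include <string>

        // Stubs for the CMS framework types the question assumes.
        struct View {};
        struct Control {};
        struct DataObject {};
        struct IControl { virtual ~IControl() = default; };

        // Data: a User knows its own fields and nothing else.
        struct User : DataObject {
            std::string name, password, email;
        };

        // Presentation: every way a user is rendered lives in one class.
        struct UserView : View {
            std::string getLogin()      { return "<form>login</form>"; }
            std::string getShortLogin() { return "<form class='short'>login</form>"; }
            std::string getRegister()   { return "<form>register</form>"; }
        };

        // Behavior: every action on users lives in one class.
        struct UserControl : Control, IControl {
            bool login(User& u, const std::string& password) {
                return password == u.password; // a real check would hash, etc.
            }
            void logout(User&) { /* end session */ }
        };

    Each class here changes for exactly one kind of reason: markup changes hit UserView, workflow changes hit UserControl, schema changes hit User. Splitting further, as in Implementation #1, means a single workflow change (say, to session handling) fans out across Login, Logout and Session at once.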