Search Results

Search found 5410 results on 217 pages for 'n tier architecture'.


  • Learning about the low level

    - by Anoners
    I'm interested in learning more about the PC from a lower (machine) level. I graduated from a school which taught us concepts using the Java language, which abstracts that level away almost completely. As a result I only learned a bit from the one required assembly language course, and because it had to cram in assembly plus quite a few details about architecture, it was hard to come away with a very deep picture of what is going on down there. At work I focus on Unix socket programming in C, so I'm much closer to the hardware now, but I feel I should learn a bit more about what streams really are, how memory management and paging work, what goes on when you call "paint()" on a graphics buffer, etc. I missed out on a lot of this and I'm looking for a good resource to get me started. I've heard a lot about the "Pink Book" by Peter Norton (Programmer's Guide to the IBM PC, Programmer's Guide to Inside the PC, etc.). It seems like this is on the right track; however, the original is quite outdated, and the newer editions have had conflicting reviews, with many people saying to stay away from them. I'm not sure what the SO crowd thinks about this book, or whether they have suggestions for similar books, online resources, etc. that might be good primers for this sort of thing. Any suggestions would be appreciated.

    Read the article

  • kick off a map reduce job from my java/mysql webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a Java-based webapp, with a JPA-based ORM backed by a MySQL relational database. As part of the application I have a batch job that compares thousands of database records with each other. This job has become too time consuming and needs to be parallelized, so I'm looking at using MapReduce and Hadoop in order to do this. However, I'm not too sure how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from MySQL into Hadoop jobs. I have done some initial research and found the following relevant information and possibilities: 1) https://issues.apache.org/jira/browse/HADOOP-2536 gives an interesting overview of some built-in JDBC support; 2) the article http://architects.dzone.com/articles/tools-moving-sql-database describes some third-party tools for moving data from MySQL to Hadoop. To be honest, I'm just starting out with learning about HBase and Hadoop, and I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. Cheers, Brian

    Read the article

  • routing mvc on the web

    - by generic_noob
    Hi, I was wondering if anyone could give me some advice on how I could improve the routing (and/or architecture) for each 'section' of my application. (I'm writing in PHP5, and trying to use strict MVC.) Basically, I have a generic index page for the app that spews out boilerplate stuff like jQuery and the CSS, and it also generates the main navigation for the entire site, but I'm unsure about the best approach to connect the 'main menu' items (hyperlinks) with their associated controllers. Up until now I have been appending strings to the url and using a 'switch' statement to branch to the correct controller (and view) by extracting the strings back out of $_GET to execute the code for the corresponding action. For instance, if I had a basic CRUD system for customer data, the url to edit a customer's details would look like 'www.example.com/index.php?page=customer&action=edit&id=4'. I'm worried that there is a security concern in doing it this way, and I'm not sure of an alternative for branching the main 'index.php' file to the correct controller for each action once the user has clicked the link. Would it be better to use mod_rewrite to disguise the controllers' names? Or to create a system similar to the ASP.NET MVC framework, where there is a separate routing component and each url is filtered to find the associated controller? Cheers!

    Read the article

  • Best way to implement plugin framework - are DLLs the only way (C/C++ project)?

    - by Microkernel
    Introduction: I am currently developing a document classifier in C/C++, using a naive Bayes model for classification. But I want users to be able to use any algorithm they want (or that I want in the future), so I have separated the algorithm part of the architecture into a plugin that is attached to the main app at start-up. Any user can then write his own algorithm as a plugin and use it with my app. Problem statement: the way I intend to develop this is to have each algorithm the user wants to use built as a DLL file and put into a specific directory. At start-up, my app will search for all the DLLs in that directory and load them. My questions: (1) What if malicious code is packaged as a DLL (exporting the same functions mandated by the plugin framework) and put into my plugins directory? My app will think it's a plugin, pick it up and call its functions, so the malicious code can easily bring my entire app down (or, in the worst case, turn my app into a malicious-code launcher!). (2) Are DLLs the only way available to implement the plugin design pattern? (Not only out of fear of malicious plugins; it's also a generic question out of curiosity. :) ) (3) I think a lot of software is written with a plugin model for extensibility; if so, how does it defend against such attacks? (4) In general, what do you think about my decision to use a plugin model for extensibility (and do you think I should look at any other alternatives)? Thank you -MicroKernel :)

    Read the article

  • What pattern is layered architecture in ASP.NET?

    - by haansi
    Hi, I am an ASP.NET developer and don't know much about patterns and architecture; I would be very thankful if you could guide me here. In my web applications I use 4 layers: (1) a web site project (web forms, user controls and master pages, each with their code-behind .cs files); (2) CustomTypesLayer, a class library (custom types, enumerations, DTOs, constructors, getters/setters and validation); (3) BusinessLogicLayer, a class library (all business logic and rules, and all calls to DAL functions); (4) DataAccessLayer, a class library (just the classes communicating with the database). My user interface calls only the BusinessLogicLayer; the BusinessLogicLayer does the processing itself and calls DataAccessLayer functions for data; web forms never call the DAL directly; CustomTypesLayer is shared by all layers. Please guide me: is this approach a pattern? I thought it may be MVC or MVP, but the pages have their code-behind files as well, which is confusing me. If it is not a pattern, is it close to some pattern? Please guide. Thanks
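
    For illustration, a minimal sketch of how calls might flow through layers like the ones described (all class and member names here are made up, not taken from the question):

        using System;

        // CustomTypesLayer: a DTO shared by every layer
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // DataAccessLayer: only talks to the database
        public class CustomerDal
        {
            public CustomerDto GetById(int id)
            {
                // ...execute the query and map the row to a DTO...
                return new CustomerDto { Id = id, Name = "example" };
            }
        }

        // BusinessLogicLayer: rules and validation, delegates storage to the DAL
        public class CustomerBll
        {
            private readonly CustomerDal _dal = new CustomerDal();

            public CustomerDto GetCustomer(int id)
            {
                if (id <= 0) throw new ArgumentOutOfRangeException("id");
                return _dal.GetById(id);
            }
        }

        // Code-behind (UI layer): calls the BLL only, never the DAL
        // CustomerDto customer = new CustomerBll().GetCustomer(4);

    For what it's worth, this structure is usually described as a layered (n-tier) architecture; MVC and MVP are presentation-level patterns, which is one reason the code-behind files don't map onto them cleanly.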

    Read the article

  • Is it good practice to blank out inherited functionality that will not be used?

    - by Timo Kosig
    I'm wondering if I should change the software architecture of one of my projects. I'm developing software for a project where two sides (in fact a host and a device) use shared code; that helps because shared data, e.g. enums, can be stored in one central place. I'm working with what we call a "channel" to transfer data between device and host. Each channel has to be implemented on both the device and the host side. We have different kinds of channels: ordinary ones, and special channels which transfer measurement data. My current solution has the shared code in an abstract base class, and from there on the code is split between the two sides. As it has turned out, there are a few cases where we would have shared code but can't share it, so we have to implement it on each side. The principle of DRY (don't repeat yourself) says that you shouldn't have code twice. My thought was to combine the functionality of e.g. the abstract measurement channel for the device side and the host side in one abstract class with shared code. That means, though, that once we create an actual class for either the device or the host side of that channel, we have to hide the functionality that is only used by the other side. Is this an acceptable thing to do:

        public abstract class MeasurementChannelAbstract
        {
            protected void MethodUsedByDeviceSide() { }
            protected void MethodUsedByHostSide() { }
        }

        public class DeviceMeasurementChannel : MeasurementChannelAbstract
        {
            public new void MethodUsedByDeviceSide()
            {
                base.MethodUsedByDeviceSide();
            }
        }

    Now DeviceMeasurementChannel exposes only the device-side functionality from MeasurementChannelAbstract. By declaring all methods/members of MeasurementChannelAbstract protected, you have to use the new keyword to make that functionality accessible from the outside. Is that acceptable, or are there any pitfalls, caveats, etc. that could arise later when using the code?
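
    As a side note on the mechanics proposed above: a member introduced with new is bound by the compile-time type of the reference, not polymorphically, so the re-exposed method is only visible through references of the derived type. A minimal sketch, reusing the classes from the question:

        MeasurementChannelAbstract channel = new DeviceMeasurementChannel();
        // channel.MethodUsedByDeviceSide();  // does not compile: the base member is protected
        DeviceMeasurementChannel device = new DeviceMeasurementChannel();
        device.MethodUsedByDeviceSide();      // OK: binds to the hiding method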

    Read the article

  • How to structure Javascript programs in complex web applications?

    - by mixedpickles
    Hi there. I have a problem which is not easily described. I'm writing a web application that makes heavy use of jQuery and ajax calls. I don't have experience designing the architecture of JavaScript programs, and I realize that my program does not have a good structure: I think I have too many identifiers referring to (more or less) the same thing. Take an arbitrary UI widget as an example. The event handlers use DOM elements as parameters; the DOM element is what represents the widget in the browser. A lot of the time I use jQuery objects (basically wrappers around DOM elements) to do something with the widget; sometimes they are used transiently, sometimes they are stored in a variable for later use. The ajax calls use string identifiers for these widgets, which are processed server side. Besides that, I have a widget class whose instances represent a widget; it is instantiated through the new operator. So I somehow have four different identifiers for the same thing, which need to be kept in sync until the page is loaded anew. This does not seem to be a good thing. Any advice?

    Read the article

  • Abstract Design Pattern implementation

    - by Pathachiever11
    I started learning design patterns a while ago (I've only covered facade and abstract factory so far, but am enjoying it). I'm looking to apply the abstract factory pattern to a problem I have: supporting various database systems through one abstract class with a set of methods and properties, which the underlying concrete classes (inheriting from the abstract class) then implement. I have created a DatabaseWrapper abstract class, and SqlClientData and MSAccessData concrete classes that inherit from DatabaseWrapper. However, I'm still a bit confused about how the pattern works when implementing these classes on the client. Would I do the following?

        DatabaseWrapper sqlClient = new SqlClientData(connectionString);

    This is what I saw in an example, but it is not what I'm looking for, because I want to encapsulate the concrete classes; I only want the client to use the abstract class. That way I can support more database systems in the future with minimal changes to the client, just creating a new concrete class for the implementation. I'm still learning, so there might be a lot of things wrong here. Please tell me how I can encapsulate all the concrete classes, and whether there is anything wrong with my approach. Many thanks! PS: I'm very excited to get into software architecture, but am still a beginner, so take it easy on me. :)
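
    One common way to get that encapsulation is a factory whose return type is the abstract class, so client code never names a concrete type. A minimal sketch, reusing the question's classes (the factory itself and the provider strings are hypothetical):

        using System;

        public static class DatabaseWrapperFactory
        {
            public static DatabaseWrapper Create(string provider, string connectionString)
            {
                switch (provider)
                {
                    case "SqlClient": return new SqlClientData(connectionString);
                    case "MSAccess":  return new MSAccessData(connectionString);
                    default: throw new NotSupportedException(provider);
                }
            }
        }

        // Client code depends only on the abstraction:
        // DatabaseWrapper db = DatabaseWrapperFactory.Create("SqlClient", connectionString);

    The provider string can come from a configuration file, so supporting another database later means adding one concrete class and one case in the factory, with no changes to client code.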

    Read the article

  • Should we write detailed architecture design or just an outline when designing a program?

    - by EpsilonVector
    When I'm doing design for a task, I keep fighting the nagging feeling that, aside from serving as a general outline, the design is going to be more or less ignored in the end. I'll give you an example: I was writing a frontend for a device that has read/write operations. It made perfect sense in the class diagram to give it a read and a write function. Yet when it came down to actually writing them, I realized they were literally the same function with just one line of code changed (a read vs. a write function call), so to avoid code duplication I ended up implementing a do_io function with a parameter that distinguishes between the operations. Goodbye, original design. This is not a terribly disruptive change, but it happens often and can happen in more critical parts of the program as well, so I can't help wondering whether there's a point in designing in more detail than a general outline, at least when it comes to the program's architecture (obviously when you are specifying an API you have to spell everything out). This might just be the result of my inexperience in doing design, but on the other hand we have agile methodologies, which more or less say "we give up on planning far ahead, everything is going to change in a few days anyway", and that is often how I feel. So, how exactly should I "use" design?

    Read the article

  • Is a Mission Oriented Architecture (MOA) a better way to describe things than SOA?

    - by Brian Langbecker
    I might sound like a troll, but I would like to seriously understand this more deeply. The place I work at has started to use the term MOA instead of SOA, as we believe it brings more clarity, and we want to compare it to the true goals of SOA. A Mission Oriented Architecture is an approach whereby an application is broken down into various business mission elements, with the database, file assets, and batch and real-time functionality all tightly coupled in terms of delivering that piece of the functionality. The mission allows the developers to focus on a specific piece of functionality to get it right, and to build it so that piece can scale as an independent entity within the overall application. By tightly coupling the data, file assets and business logic, you achieve the goal of working on a very large problem in bite-size pieces. Some definitions of SOA confuse it with what is essentially a method call on a web service, versus a true "service". As an architect, I have always found it fun getting everyone on the same page regarding SOA. Is it better to call it a "mission" versus a "service"?

    Read the article

  • Error message when running "make" command: /usr/bin/ld: i386 architecture of input file is incompatible with i386:x86-64 output

    - by user784637
    I am unable to create a working executable by running the make command in a tree previously built on an i386 machine. I'm getting an error message of the form:

        me@me-desktop:~$ make
        /usr/bin/ld: i386 architecture of input file `../../Lib/libProgram.a(something.o)'
        is incompatible with i386:x86-64 output

    I've been told and reassured that this program has been tested and successfully compiled on 64-bit Fedora. I'm running a 64-bit machine:

        me@me-desktop:~$ uname -m
        x86_64

    I'm running Ubuntu 10.04:

        me@me-desktop:~$ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.3 LTS
        Release:        10.04
        Codename:       lucid

    I'm using g++ 4.4.3 (Ubuntu 4.4.3-4ubuntu5) and GNU libtool 2.2.6b. Any clues as to what is going wrong?
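
    Since the tree was previously built on an i386 machine, stale 32-bit objects from that build are a likely culprit. As a hedged suggestion (the library path is taken from the error message above), something like this should confirm it and force a clean rebuild:

        objdump -f ../../Lib/libProgram.a | grep 'file format'   # elf32-i386 members = stale 32-bit build
        make clean && make                                       # rebuild everything for x86-64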

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence: "The idea is to have no game methods embedded in the entity." And: "The component consists of a minimal set of data needed for a specific purpose." "Systems are single-purpose functions that take a set of entities which have a specific component." I think this is not object oriented, because a big part of being object oriented is combining your data and behavior together. As evidence: "In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data." ECS, on the other hand, seems to be all about separating your data from your behavior.
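
    To make the separation concrete, a tiny sketch of the ECS shape being described, with plain-data components and behavior living in a single-purpose system (all names are illustrative only):

        // Components: pure data, no methods
        public struct Position { public float X, Y; }
        public struct Velocity { public float Dx, Dy; }

        // System: a single-purpose function over entities that have both components
        public static class MovementSystem
        {
            public static void Update(Position[] positions, Velocity[] velocities, float dt)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += velocities[i].Dx * dt;
                    positions[i].Y += velocities[i].Dy * dt;
                }
            }
        }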

    Read the article

  • How to remove all associated files and configuration settings of an app installed through 'force architecture' command

    - by Mysterio
    A few weeks ago I installed a 32-bit .deb file through the 'force architecture' command (on my 64-bit notebook); however, the procedure was unsuccessful and I used the apt-get purge command to uninstall the app. It seems there are some leftovers of the app, which have now broken system update. Synaptic recommended a sudo apt-get install -f, which I did in the terminal with this initial response:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          libntfs10
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          crossplatformui
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]?

    I chose 'Y' and then got this response:

        (Reading database ... 187616 files and directories currently installed.)
        Removing crossplatformui ...
        ztemtvcdromd: no process found
        dpkg: error processing crossplatformui (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems the app I installed, crossplatformui, is still on my system and has caused the update manager to stop running with a partial upgrade warning. What do I do now?
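
    The "post-removal script returned error exit status 1" line suggests the package's own maintainer script is what keeps failing. A common workaround, offered as a hedged sketch rather than a guaranteed fix, is to neutralize that script and purge again (paths assume the standard dpkg layout):

        # Replace the failing post-removal script with a no-op, then purge
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/crossplatformui.postrm
        sudo dpkg --purge crossplatformui
        sudo apt-get install -f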

    Read the article

  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed urls, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy and mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched, and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are MySQL and have a similar structure. The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves 2 different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think I need to create a service that can handle a lot of IO requests, since the app is heavily used to create orders for restaurants at certain times of the day. The questions, with all this in mind: What type of architecture do you recommend I follow? What gems or Node packages do you think would work best? How do I build a new Rails app and keep using the same database structure? Should I use NodeJS to build the API, or just build a new service with Ruby? I know I'm asking a lot from you guys, but please help me by answering any topic you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!

    Read the article

  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, and I depend on other people to build an API that the app can make use of to retrieve data. While thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations: since my work depends on other people completing their tasks, and on a test server, this slows work down at my end. So the question is: what's the best practice for creating test repositories and classes, and implementing them, without having to alter several places in the code to swap between the test classes and the actual repositories / proper API calls? Contemplate the following scenario:

        GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc] init];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    Once the data is available via the API, GetDataFromApiCommand could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData]. There might be multiple instances of this in various places in the code, so replacing all of them wherever they are is a slow and painful process, which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this?

        GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    What are the best practices to achieve this result? Since this would be most useful even when the actual API is available (to run tests, or to work offline), the ApiCommands would not necessarily have to be replaced permanently; there would be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one; however, that would also be useful for testing one or two actual API commands while mixing them with test data.

    EDIT: The way I have chosen to go with this is the factory pattern. I set up the factory as follows:

        @implementation ApiCommandFactory

        + (ApiCommand *)newApiCommand {
            // return [[ApiCommand alloc] init];
            return [[ApiCommandMock alloc] init];
        }

        @end

    And anywhere I want to use the ApiCommand class:

        GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];

    When the actual API call is required, the comment markers can be swapped so the mock line is commented out instead. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, i.e.:

        [ApiCommandFactory newSecondApiCommand:@"param1"];

    This will work quite well with repositories as well.

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that has grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separate from the public website, and I was wondering what the best way to do this would be. For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating: 1) Create a slave database of the public-facing database on another machine; extract all of the model and library code so that it can be shared between the applications; create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than serving multiple different ones). I'm also worried about potential latency issues if the slave is not on the same network. 2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back, and it seemed to advocate this for distributed systems. (Alternatively, in some cases basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail. 3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status; would it make sense to run those on a completely separate system and push that data to the public site, while the user/group admin functionality runs on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.NET application, and planning to separate the DB, 'service' tier and web/UI tier onto separate physical layers. What is the best/easiest strategy for moving serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON, using simple ASP.NET pages to serve the middle tier; the UI/web tier would request data from a web server (hidden from the outside user) that returns a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible, for efficiently moving data over the WAN between tiers. I know that some folks use .asmx web services for this kind of task, but that seems to carry excess SOAP overhead, and the payload is not as human-readable (testable) as POCOs serialized as JSON. Others use more complex technology like WCF, which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (DB) tier and the web (UI) tier over the WAN using .NET technologies? Thanks!
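
    A minimal sketch of the kind of JSON "emitter" described, using an IHttpHandler and the BCL's JavaScriptSerializer (the handler and the stubbed POCO are hypothetical, not from the question):

        using System.Web;
        using System.Web.Script.Serialization;

        public class CustomerJsonHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // In the real tier this object would come from the service/data layer
                var customer = new { Id = 4, Name = "example" };

                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(customer));
            }
        }

    Compression over the WAN can then be handled by turning on gzip at the web server rather than in code.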

    Read the article

  • what is the best practice approach for n-tier application development with entity framework?

    - by samsur
    I am building an application using Entity Framework, and I am using the T4 template to generate self-tracking entities (STEs). Currently I am thinking of putting the Entity Framework code in a separate project; in this same project I would have partial classes with additional methods for the entities. I am thinking of creating a separate project for a service layer (WCF) with methods for the upper/presentation tier. The WCF layer will reference the Entity Framework project, and its methods will return the entities or accept the entities as parameters. I am thinking of creating a third project for the presentation layer (ASP.NET); this will make calls to the WCF service but will also need to reference the entities, as the WCF methods take these types as parameters/return types. In short, I want to use the STEs generated by the T4 template as DTOs across all layers. I was originally thinking of creating a business logic layer with a class mapped to each entity; for example, if I have a Customer entity, the business layer would have a CustomerBLL class, and the methods on CustomerBLL would be used by the service layer. I was also trying to create DTOs in this business layer. However, I found that this approach is very time consuming, and I do not see a major benefit, as it would create more maintenance work. What is the best practice for n-tier application development using Entity Framework 4?
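
    As a rough sketch of that arrangement, the WCF layer might expose the self-tracking entities directly; the contract below is hypothetical, and the Customer stub stands in for a T4-generated STE:

        using System.ServiceModel;

        // Stand-in for a T4-generated self-tracking entity
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            Customer GetCustomer(int customerId);

            [OperationContract]
            void UpdateCustomer(Customer customer); // change tracking travels with the entity
        }

    Because STEs serialize their change-tracking state, the presentation tier can modify an entity and send it back through UpdateCustomer without a hand-written DTO layer in between.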

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) running Ubuntu Server 14.04 LTS (EBS)

    - by CMPSoares
    Hi, I'm working on my bachelor thesis, and for that I need to host a node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30GB disk space (from what I know, the maximum in the free tier), which is barely used. Instead, I have problems with memory consumption: the instance is using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But somehow this swap area isn't being used. Is something missing in this approach, or is there another way of reducing the memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, reducing memory usage, or both.
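
    Before changing anything else, it may be worth verifying whether the swap area ever became active and how willing the kernel is to use it; these are standard Linux checks, not a guaranteed diagnosis:

        swapon -s                      # the file should be listed as an active swap area
        free -m                        # shows swap totals and how much is in use
        cat /proc/sys/vm/swappiness    # a low value makes the kernel avoid swap until memory is exhausted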

    Read the article

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName    | PropValue
        -----------+-------------+------------------
        1          | EmpID       | 4
        1          | EmpName     | James Jones
        1          | Reason      | Vacation
        1          | BeginDate   | 2/24/2010
        1          | EndDate     | 2/30/2010
        1          | Destination | Spectate Swamp
        2          | ...         | ...

    I think this is a pretty good, extensible design: we can easily add new properties to a vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that; I'm not completely sure. I'm parsing the XML like this (I would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100), ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of it was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. (The original post includes an SOA diagram and a Visio workflow diagram here; the main items in the latter are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, all analogous to the services in the SOA diagram.) The requirements for this system are (note: I don't control these, they were given to me by management): (1) the main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system); (2) alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages), and we also want to interface with the company voice mail system, but that is not a "hard" requirement; (3) performance requirements: must handle 250,000 transactions per second, should be able to handle up to 20,000 employees (right now we have 3), 99.99% uptime ("four nines") expected; (4) must be secure against outside hacking, but users cannot be required to enter a username/password; (5) platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc; (6) time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility (for example, would it be difficult to convert this to an iPhone app)? How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)
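
    One hedged aside on the persistence snippet above: building the INSERT statements by string concatenation is injection-prone and will also break on string values. A parameterized variant of the same loop (reusing the question's pn/pt/pv arrays and GlobalConnection singleton) would look like:

        // using System.Data.SqlClient;
        using (var cmd = new SqlCommand(
            "INSERT VacationProperties VALUES (@name, @type, @required)",
            GlobalConnection.Instance))
        {
            for (int i = 0; i < pn.Length; i++)
            {
                cmd.Parameters.Clear();
                cmd.Parameters.AddWithValue("@name", pn[i]);
                cmd.Parameters.AddWithValue("@type", pt[i]);
                cmd.Parameters.AddWithValue("@required", pv[i]);
                cmd.ExecuteNonQuery();
            }
        }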

    Read the article

  • Is there any kind of established architecture for browser based games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the starting of a new round by a game master, and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot; again, I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas, but I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?

    EDIT: After reading the first answers, I am under the impression of having made a mistake when including the "MMO" part in the title. The game will not be all fancy (i.e. 3D or such) on the client side, and the logic will exist completely on the server, apart from basic form validation for the user, which will also be mirrored on the server side. So the target toolset will be HTML5, JavaScript, probably jQuery(UI). My question is more related to the software architecture/design of a system that enforces certain rules.

    Separation of ruleset and presentation: One problem I am having is that I want to separate the game rules from the presentation. The first step would be to make a separate module for the game "engine" that only exposes an interface allowing all actions to be taken in a clean way. If an action fails with regard to some pre/post condition, the engine throws an exception, which is then presented to the user like "you cannot sell something you do not own" or "after that you would end up in a situation which is not a valid game state". The problem here is that I would like to be able to not even present an invalid action in the first place, or to grey out the corresponding UI elements.

    Changing and tweaking the ruleset: Another big thing is the ruleset. It will probably evolve over time and most definitely must be tweaked. What's more, it should be possible (to a certain extent) to build a ruleset that fits a specific game round, i.e. choosing different kinds of behaviours in different aspects of the game. This would be something like "we play it with extension A today but we throw out extension B". For me, this screams "architectural/design pattern", but I have no idea who might have published on something like this, or even what to google for.

    Read the article

  • Why do I get "file is not of required architecture" when I try to build my app on an iPhone?

    - by Dale
    My app seemingly runs fine in the simulator, but the first time I hooked a phone up to my system and had it build for the device, I got a huge error log with things like:

        Build SCCUI of project SCCUI with configuration Debug

        CompileXIB HandleAlert.xib
            cd /Users/gdbriggs/Desktop/SCCUI
            setenv IBC_MINIMUM_COMPATIBILITY_VERSION 3.1
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/usr/bin/ibtool --errors --warnings --notices --output-format human-readable-text --compile /Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos/SCCUI.app/HandleAlert.nib /Users/gdbriggs/Desktop/SCCUI/HandleAlert.xib
        /* com.apple.ibtool.document.warnings */
        /Users/gdbriggs/Desktop/SCCUI/HandleAlert.xib:13: warning: UITextView does not support data detectors when the text view is editable.

        Ld build/Debug-iphoneos/SCCUI.app/SCCUI normal armv6
            cd /Users/gdbriggs/Desktop/SCCUI
            setenv IPHONEOS_DEPLOYMENT_TARGET 3.1
            setenv MACOSX_DEPLOYMENT_TARGET 10.5
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.sdk -L/Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos -F/Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos -F/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks -filelist /Users/gdbriggs/Desktop/SCCUI/build/SCCUI.build/Debug-iphoneos/SCCUI.build/Objects-normal/armv6/SCCUI.LinkFileList -mmacosx-version-min=10.5 -dead_strip -miphoneos-version-min=3.1 -framework Foundation -framework UIKit -framework CoreGraphics -framework MessageUI -o /Users/gdbriggs/Desktop/SCCUI/build/Debug-iphoneos/SCCUI.app/SCCUI

        ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/Foundation.framework/Foundation, file is not of required architecture
        ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/UIKit.framework/UIKit, file is not of required architecture
        ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics, file is not of required architecture
        ld: warning: in /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.sdk/System/Library/Frameworks/MessageUI.framework/MessageUI, file is not of required architecture
        Undefined symbols:
          "_OBJC_CLASS_$_UIDevice", referenced from:
              __objc_classrefs__DATA@0 in SCAuthenticationHandler.o
          "_OBJC_CLASS_$_NSString", referenced from:
              __objc_classrefs__DATA@0 in CCProxy.o
              __objc_classrefs__DATA@0 in AlertSummaryViewController.o
              __objc_classrefs__DATA@0 in HomeLevelController.o
              __objc_classrefs__DATA@0 in SCAuthenticationHandler.o
              __objc_classrefs__DATA@0 in SCRequestHandler.o
          "_UIApplicationMain", referenced from:
              _main in main.o
          "_objc_msgSend", referenced from:
              _main in main.o
              -[SCCUIAppDelegate applicationDidFinishLaunching:] in

    and it just keeps going. At / near the bottom it says:

        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    What am I doing wrong?

    Read the article

  • How to start with entity framework and service oriented architecture?

    - by citronas
    At work I need to create a new web application that will connect to a MySQL database. (So far I only have experience with LINQ to SQL classes and MSSQL servers.) My superior tells me to use the Entity Framework (he probably refers to LINQ to Entities) and to provide everything as a service-based architecture. Unfortunately nobody at work has experience with that framework or with a really nice service-oriented architecture (until now, no customer wanted to pay for architecture he can't see). This specific project I'm leading will be long-term, meaning multiple years, so it would be best to design it in such a way that multiple target platforms, like ASP.NET or C# WPF, could use it; for now, the main target platform is ASP.NET. So I have the following questions: 1) Where can I best read up on what's really behind service-oriented architecture (for now, beginner tutorials work fine as well) and how to do it in best practice? 2) So far I can't see a real difference between LINQ to SQL classes and the information I've googled so far on the Entity Framework. So, what's the difference? Where do I find good tutorials for it? 3) Is there any difference in the Entity Framework depending on the database server (MSSQL or MySQL)? If not, does that mean that code snippets I stumble across will work database-independently? 4) I use Visual Studio 2010. Is there anything specific I need to keep in mind?

    Read the article

  • Would you use IdeaBlade with DevExpress controls for an N-Tier system?

    - by ERaubenheimer
    I've worked on numerous projects where we developed our own frameworks and platforms from scratch, and it was never really successful, so I'm re-evaluating whether to use a commercial product to assist us with our product development instead. If you got the chance to develop an N-tier system with an SOA layer from scratch, would you recommend IdeaBlade with DevExpress? If not, what other combinations would you recommend? Requirements: - SOA layer - Business components - DAL with database independence as optional - Developer support - Easy upgradability - .NET - No royalties

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes: name different device categories; discuss the functions and structure of I/O modules; describe the principles of programmed I/O, interrupt-driven I/O and DMA; discuss the evolution characteristics of I/O channels; describe different types of I/O interface; explain the principles of point-to-point and multipoint configurations; discuss the way in which a FireWire serial bus functions; discuss the principles of the InfiniBand architecture.

    External Devices. An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories: human readable (e.g. video display), machine readable (e.g. magnetic disk), and communications (e.g. wifi card).

    I/O Modules. An I/O module has two major functions: an interface to the processor and memory via the system bus or central switch, and an interface to one or more peripheral devices by tailored data links.

    Module Functions. The major functions or requirements for an I/O module fall into the following categories: control and timing, processor communication, device communication, data buffering, and error detection. The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication, involving commands, status information, and data. An essential task of an I/O module is data buffering, owing to the relatively slow speeds of most external devices. Finally, an I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure. An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and direct memory access (DMA).

    Programmed I/O. When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module performs the requested action and then sets the appropriate bits in the I/O status register; it takes no further action to alert the processor.

    I/O Commands. To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command. There are four types of I/O commands that an I/O module may receive when it is addressed by a processor: control (activate a peripheral and tell it what to do), test (test various status conditions associated with an I/O module and its peripherals), read (causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer), and write (causes the I/O module to take an item of data from the data bus and subsequently transmit it to the peripheral). The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions. With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system; each device is given a unique identifier or address. When the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible: memory-mapped I/O and isolated I/O (for a detailed explanation read page 245 of the book). The advantage of memory-mapped I/O over isolated I/O is the larger repertoire of instructions that can be used, allowing more efficient programming; the disadvantage is that valuable memory address space is used up.

    Interrupt-Driven I/O. Interrupt-driven I/O works as follows: the processor issues an I/O command to a module and then goes on to do some other useful work; the I/O module then interrupts the processor to request service when it is ready to exchange data with the processor; the processor executes the data transfer and then resumes its former processing.

    Interrupt Processing. The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs: (1) the device issues an interrupt signal to the processor; (2) the processor finishes execution of the current instruction before responding to the interrupt; (3) the processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt (the acknowledgement allows the device to remove its interrupt signal); (4) the processor prepares to transfer control to the interrupt routine by saving the information needed to resume the current program at the point of interrupt, the minimum being the status of the processor and the location of the next instruction to be executed; (5) the processor loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt, and also saves the values of the processor registers, because the interrupt operation may modify these; (6) the interrupt handler processes the interrupt, including examination of status information relating to the I/O operation or other event that caused the interrupt; (7) when interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers; (8) finally, the PSW and program counter values from the stack are restored.

    Design Issues. Two design issues arise in implementing interrupt I/O: because there will be multiple I/O modules, how does the processor determine which device issued the interrupt, and if multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, four general categories of techniques are in common use: multiple interrupt lines, software poll, daisy chain, and bus arbitration (for a detailed explanation of these approaches read page 250 of the textbook). Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks: the I/O transfer rate is limited by the speed with which the processor can test and service a device, and the processor is tied up managing each I/O transfer, with a number of instructions executed for each transfer.

    Direct Memory Access. When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function. DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. The DMA module must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending the following information: whether a read or write is requested (via the read or write control line between the processor and the DMA module); the address of the I/O device involved (communicated on the data lines); the starting location in memory to read from or write to (communicated on the data lines and stored by the DMA module in its address register); and the number of words to be read or written (communicated via the data lines and stored in the data count register). The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, so the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors. As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept: an I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common: a selector channel controls multiple high-speed devices, while a multiplexor channel can handle I/O with multiple devices at the same time, transferring characters to each as fast as possible.

    The External Interface: FireWire and InfiniBand. One major characteristic of an interface is whether it is serial or parallel. With a parallel interface there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously; with a serial interface there is only one line used to transmit data, and bits must be transmitted one at a time. With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue looks as follows: the I/O module sends a control signal requesting permission to send data; the peripheral acknowledges the request; the I/O module transfers the data; the peripheral acknowledges receipt of the data. For a detailed explanation of FireWire and InfiniBand technology read pages 264-270 of the textbook.

    Read the article
