Search Results

Search found 14838 results on 594 pages for 'architecture rest json s'.

  • What should a "Solution Architecture" document describe?

    - by anjanb
    We're going to build a solution which includes acquiring data through mobile phones (J2ME) and laptops (browser-based data acquisition), uploading that data to back-end servers (built with J2EE), and then analyzing it, including generating various types of reports. The solution will include a CMS for building the website and various interfaces for various types of users. I'm to write a "Solution Architecture" document for it. What should that document consist of? Are there any templates (Word, .ODT, .PDF)? Any inputs appreciated. Thank you so much.

    Read the article

  • Native JSON support in Oracle Database 12.1.0.2

    - by Carsten Czarski
    With patchset 12.1.0.2, released in July 2014, the Oracle database ships with native JSON support for the first time. Among other things, it is now possible to:

    store and validate JSON in tables
    extract data from JSON documents
    generate "relational views" over JSON documents
    index JSON documents stored in tables

    In this tip you will learn how to put the new SQL/JSON functions to practical use (and not only with APEX). These new SQL functions implement the first half of the JSON support in the Oracle database, the SQL side. The PL/SQL package APEX_JSON, already announced with APEX 5.0, will cover the second half, the PL/SQL side - so that, at the latest with APEX 5.0 on 12.1.0.2, a fully JSON-capable database will be at your disposal.

    More information at the "Development meets Customers" event: On September 17 in Frankfurt you can learn even more about JSON and Oracle 12c. The half-day event Oracle Development meets Customers "JSON und die Oracle Database 12c" is a unique chance to get the latest information on the new JSON support in Oracle 12c, straight from the development team. Beyond that, further current developer topics such as node.js, REST web services, and others will be covered. Beda Hammerschmidt, one of the lead developers of the JSON database features, will present the JSON functions and share tips & tricks for using them. Don't miss this opportunity - register right away.

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month, so I'm a bit late posting this review here.

    Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft.

    What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and then give the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not taken to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way.

    The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style; this kind of academic work doesn't lend itself to that style.

    I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • Basic web architecture: Perl -> PHP

    - by Sunny Jim
    This is an architecture question. If there is a better forum, please redirect me; apologies in advance. Essentially every website is built around a relational database, right? When a user uploads form data, that data is stored in a table. The problem is that the table structure(s) need to be modified whenever the website form is modified, although I understand that modern web frameworks work around this problem by automatically building forms based on the table structure. For the last 20 years, I have been building websites using Perl. When I first encountered this problem, the easiest solution was to save serialized Perl objects as data BLOBs. After XML's introduction, this solution worked even better because XML is so effective for representing arbitrary data. This approach is consistent with the original Perl principles of Hubris, Laziness, and Impatience, and I'm pretty committed to it. Obviously, the biggest drawback is that this solution locks me into the Perl interpreter. So instead, I've just completed a prototype of a universal RDB table. The prototype is written in Perl, but porting it to PHP will be a good chance to develop those skills. The principle is based on the XML::Dumper module, which converts arbitrary Perl data structures into uniform XML. With my approach, each XML node is stored as a table record. I underestimated this undertaking and rolled something up myself, but the effort allows me to discuss the basic design instead of implementation details. As mentioned, I'm pretty committed to this approach of using flexible data structures. It's been successfully deployed on many websites, large and complex. But are there any drawbacks I've overlooked? I rolled my own; are other people taking a similar approach to their data? What kinds of solutions are available? I have not abandoned my dream of eventually contributing something useful to the worldwide community. In order to proceed, the next step would be peer review. How does one pursue that effort? Thanks! -Jim
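
    To make the "universal table" idea concrete, here is a minimal sketch - in JavaScript rather than Perl or PHP, and with illustrative column names that are assumptions, not the poster's actual design - of flattening an arbitrary nested structure into one node per fixed-schema row, in the spirit of storing XML::Dumper output node by node:

        // Flatten any nested value into generic node records: one row per node,
        // every row fitting a single fixed table schema.
        function flattenToNodes(value, parentId, key, rows, state) {
          const id = state.nextId++;
          const isObj = value !== null && typeof value === 'object';
          rows.push({
            id,                                   // node primary key
            parent_id: parentId,                  // parent node (null for the root)
            node_key: key,                        // hash key or array index
            node_type: isObj ? (Array.isArray(value) ? 'array' : 'hash') : 'scalar',
            node_value: isObj ? null : String(value), // scalars stored as text
          });
          if (isObj) {
            for (const [k, child] of Object.entries(value)) {
              flattenToNodes(child, id, k, rows, state);
            }
          }
          return rows;
        }

        // Any form's data fits the same table, whatever its shape.
        const rows = flattenToNodes({ name: 'Jim', tags: ['perl', 'php'] },
                                    null, null, [], { nextId: 1 });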

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday's blog post we understood how the Big Data evolution happened. Today we will understand the basics of Big Data architecture.

    Big Data Cycle

    Just like every other database-related application, Big Data projects have their own development cycle, and the three Vs certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of capturing, transforming, integrating, and analyzing the data, and building actionable reporting on top of it. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions everyone should ask before going ahead with a Big Data architecture.

    Questions to Ask

    How big is your total database?
    What is your requirement for reporting in terms of time - real time, semi real time, or at frequent intervals?
    How important is data availability, and what is the plan for disaster recovery?
    What are the plans for network and physical security of the data?
    What platform will be the driving force behind the data, and what are the different service level agreements for the infrastructure?

    These are just basic questions, but based on your application and business needs you should come up with a custom list of questions to ask. As I mentioned earlier, these questions may look quite simple, but the answers will not be simple. When we are talking about a Big Data implementation, there are many other important aspects which we have to consider when we decide on the architecture.

    Building Blocks of Big Data Architecture

    It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of Big Data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work. The image gives a good overview of how the various components of a Big Data architecture are associated with each other. In Big Data, many different data sources are part of the architecture, hence extract, transform, and integration layers are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, the data is processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data architecture: hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented.

    NoSQL in Data Management

    NoSQL is a very famous buzz word and it really means Not Relational SQL or Not Only SQL. This is because in a Big Data architecture the data can be in any format. It can be unstructured, relational, or in any other format or from any other data source. To bring all the data together, relational technology is not enough, hence new tools, architectures, and algorithms were invented which take care of all kinds of data. These are collectively called NoSQL.

    Tomorrow

    Over the next four days we will answer the buzz word - Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background

    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client: I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem

    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (a quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty well so far, in terms of functionality. But as the codebase grew bigger, this style of coding began to seem a little strange. Firstly, the client code is clearly not shared, and yet it is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like browsers do). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems)

    I could use JavaScript mixins that are only added when in a browser. Thus, in the /shared/unit.js file, I would have this code at the end:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
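
    For what it's worth, a minimal sketch of that browser-only mixin split (mixin is an assumed helper that copies methods onto the prototype; all the other names are the question's own hypothetical ones):

        // Assumed helper: copy LocalUnit's methods onto Unit's prototype.
        function mixin(Target, source) {
          for (const name of Object.keys(source)) {
            Target.prototype[name] = source[name];
          }
        }

        // shared/unit.js - deterministic, simulation-only logic
        function Unit(game) { this.game = game; this.hp = 100; }
        Unit.prototype.addHp = function (amount) { this.hp += amount; };

        // client/localunit.js - presentation-only methods, never loaded on the server
        const LocalUnit = {
          getSprite() { /* look up the unit's Image (browser only) */ },
          onDeathInClient() { /* announcements, effects, etc. */ },
        };

        // End of shared/unit.js: export on Node.js, decorate in the browser
        // (assumes localunit.js was loaded first in the browser).
        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);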

    Read the article

  • Syncing client and server CRUD operations using json and php

    - by Justin
    I'm working on some code to sync the state of models between client (a JavaScript application) and server. Often I end up writing redundant code to track the client and server objects so I can map the client-supplied data to the server models. Below is some code I am thinking about implementing to help. What I don't like about the code below is that this method won't handle nested relationships very well; I would have to create multiple object trackers. One workaround is, for each server model after creating or loading it, to simply do $model->clientId = $clientId; IMO this is a nasty hack and I want to avoid it. Adding a setClientId method to all my model objects would be another way to make it less hacky, but this seems like overkill to me. Really, clientIds are only good for inserting/updating data in some scenarios. I could go with a decorator pattern, but auto-generating a proxy class seems a bit involved. I could use a generic proxy class that uses a __call function to allow the original object data to be accessed, but this seems wrong too. Any thoughts or comments?

        // Client payload: keys quoted so json_decode() can actually parse it.
        $clientData = '[{"name": "Bob", "action": "update", "id": 1, "clientId": 200},
                        {"name": "Susan", "action": "create", "clientId": 131}]';
        $jsonObjs = json_decode($clientData);

        $objectTracker = new ObjectTracker();
        $objectTracker->trackClientObjs($jsonObjs);

        $query = $this->em->createQuery(
            "SELECT x FROM Application_Model_User x WHERE x.id IN (:ids)");
        $query->setParameter("ids", $objectTracker->getClientSpecifiedServerIds());
        $models = $query->getResult();

        // Apply client data to the server models
        foreach ($models as $model) {
            $clientModel = $objectTracker->getClientJsonObj($model->getId());
            ...
        }

        // Create new models and persist
        foreach ($objectTracker->getNewClientObjs() as $newClientObj) {
            $model = new Application_Model_User();
            ....
            $em->persist($model);
            $objectTracker->trackServerObj($model);
        }

        $em->flush();
        $resourceResponse = $objectTracker->createResourceResponse();

        // Id mappings will be an associative array representing server id
        // resources with the client-side id.
        // This method doesn't seem too flexible if we want to return additional
        // data with each resource... we would have to modify the returned data
        // structure; seems like tight coupling...
        // Ex return value:
        // [{clientId: 200, id: 1}, {clientId: 131, id: 33}]
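
    As a companion sketch, the client half of this round trip might look like the following (JavaScript; the endpoint name and the localStore registry are assumptions for illustration): send the dirty models tagged with temporary clientIds, then patch in the server-assigned ids from the returned mapping:

        const dirty = [
          { name: 'Bob', action: 'update', id: 1, clientId: 200 },
          { name: 'Susan', action: 'create', clientId: 131 },
        ];

        fetch('/sync/users', {                          // hypothetical endpoint
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(dirty),
        })
          .then((res) => res.json())
          .then((mappings) => {
            // e.g. [{clientId: 200, id: 1}, {clientId: 131, id: 33}]
            for (const m of mappings) {
              const local = localStore.get(m.clientId); // assumed client-side registry
              if (local) local.id = m.id;               // adopt the server id
            }
          });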

    Read the article

  • JavaScript - Building JSON object

    - by user208662
    Hello, I'm trying to understand how to build a JSON object in JavaScript. This JSON object will get passed to a jQuery ajax call. Currently, I'm hard-coding my JSON and making my jQuery call as shown here:

        $.ajax({
            url: "/services/myService.svc/PostComment",
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: '{"comments":"test","priority":"1"}',
            dataType: "json",
            success: function (res) { alert("Thank you!"); },
            error: function (req, msg, obj) { alert("There was an error"); }
        });

    This approach works. But I need to dynamically build my JSON and pass it on to the jQuery call. However, I cannot figure out how to dynamically build the JSON object. Currently, I'm trying the following without any luck:

        var comments = $("#commentText").val();
        var priority = $("#priority").val();
        var json = { "comments": comments, "priority": priority };

        $.ajax({
            url: "/services/myService.svc/PostComment",
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: json,
            dataType: "json",
            success: function (res) { alert("Thank you!"); },
            error: function (req, msg, obj) { alert("There was an error"); }
        });

    Can someone please tell me what I am doing wrong? I noticed that with the second version, my service is not even getting reached. Thank you
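
    A note on the likely difference between the two versions (a general jQuery fact, not part of the original post): when data is a plain object, jQuery serializes it as form-encoded key=value pairs, so the request body no longer matches the declared application/json content type; passing a string sends the body as-is. A sketch of the dynamic version with the object serialized first:

        var json = JSON.stringify({
            comments: $("#commentText").val(),
            priority: $("#priority").val()
        });

        $.ajax({
            url: "/services/myService.svc/PostComment",
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: json,   // a string now, sent unmodified as the request body
            dataType: "json",
            success: function (res) { alert("Thank you!"); },
            error: function (req, msg, obj) { alert("There was an error"); }
        });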

    Read the article

  • Can I return JSON from an .asmx Web Service if the ContentType is not JSON?

    - by Click Ahead
    I would like to post a form using ajax and jQuery to a .asmx web service and return the value from the web service as JSON. I'm using ASP.NET 4.0. I know that in order to return JSON from a web service, the following needs to be set: (1) dataType: "json"; (2) contentType: "application/json; charset=utf-8"; (3) type: "POST"; (4) the data set to something. I have tested this and it works fine (i.e. my web service returns the data as JSON) if all four are set. But let's say in my case I want to do a standard form post, i.e. test1=value1&test2=value2, so the contentType is not JSON, but I want JSON back: {test1:value1}. This doesn't seem to work, because the contentType is "application/x-www-form-urlencoded", not "application/json; charset=utf-8". Can anyone tell me why I can't do this? It seems crazy to me that you have to explicitly send JSON to get JSON back, yet if you don't use JSON (i.e. you post a urlencoded content type) then the web service will return XML. Any insights are greatly appreciated :)
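
    For illustration, here are the two request shapes being compared (jQuery; the method name is hypothetical). As far as I understand ASP.NET script services, the request's content type is what routes it to the JSON serializer rather than the older XML one, which would explain the behavior described:

        // JSON in, JSON out: handled by the ScriptService (JSON) pipeline.
        $.ajax({
            url: "/myService.asmx/GetValue",   // hypothetical method
            type: "POST",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            data: JSON.stringify({ test1: "value1" })
        });

        // Form-encoded in: handled by the classic pipeline, which answers in XML.
        $.ajax({
            url: "/myService.asmx/GetValue",
            type: "POST",
            data: { test1: "value1", test2: "value2" }  // jQuery form-encodes this
        });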

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below.

    I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if application is the correct word for it, but I will proceed with that for now) that one day might need to be everything mentioned in the title. I am bound by nothing: every language, OS, and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the 12-factor app, though; if you have any similar guidelines to suggest, please go ahead. For the sake of keeping your answers as open as possible, I'm not going to provide my experience regarding anything written in this question.

    ^ Skippers, start here

    First off, the weights of the requirements are probably something like this (on a scale of 10):

        Security - 10
        Speed - 5
        Reliability (concurrency) - 7.5
        Scalability - 10

    Speed and concurrency are not a top priority, in the sense that the program can be CPU intensive, and therefore slow, and only accept a not-that-high number of concurrent users - but both of these factors must be improvable by scaling the system. Anyway, here are my questions:

    1. How many layers should the application have, so it would be future-proof and could best fulfill the aforementioned requirements? For now, what I have in mind is the most common version: a completely separated front end, which might be a web page or an MMI application or even both; some middleware handling communication between the front and the back end, probably a server that communicates with the front end via HTTP (how the communication with the back end should be handled probably depends on the back end); and the back end itself, something that handles data through resources like a DB etc. and does various computations with the data. This, as the highest-priority part of the software, must be easily spread to multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back-end processes takes the request, chops it up into smaller parts, and puts these parts back onto the same queue as the initial request, after which the parts are handled by other back-end processes. Something map-reduce-y, so to say (a sketch of this queue flow follows below).

    2. What frameworks, languages, etc. should these layers use? The technologies used here are not that important at this moment; you can ignore this part for now. I've been pointed to node.js for this part. Do you know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea what to use for this job; there are too many options out there, so please direct me. This part (and the first one also, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or one, for starters) hosting the back end are going to be virtual machines.

    Please do give suggestions for any part of the question that you feel you have comprehensive knowledge and/or experience of, and also point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one, as I haven't provided you with all the required information; I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let it be rewritten by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world, and I really just want to learn to do something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway, thanks in advance to anyone who takes the time to help me out here.

    PS. When I do come up with a good architecture/design for this project, I will certainly make it an open project and keep you up to date with its development. For obvious reasons the very same question got closed on SO, but could you still help me?
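
    A rough sketch of that queue flow (plain JavaScript/Node.js, with an in-memory array standing in for a real message broker - purely illustrative, all names assumed):

        // The middle tier enqueues a whole request; a worker splits it into
        // parts and re-enqueues them; leaf parts are then processed by workers.
        const queue = [];

        function enqueue(task) { queue.push(task); }

        function processPart(part) { console.log('handled part', part); }

        function worker() {
          const task = queue.shift();
          if (!task) return setTimeout(worker, 50);        // idle: poll again
          if (task.type === 'request') {
            // "map" step: chop the request into smaller parts
            for (const part of task.payload) enqueue({ type: 'part', payload: part });
          } else {
            processPart(task.payload);                     // leaf work
          }
          setImmediate(worker);                            // keep draining
        }

        enqueue({ type: 'request', payload: [1, 2, 3] });
        worker();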

    Read the article

  • REST from asp.net 2.0

    - by weslleywang
    Hi, I just built an ASP.NET 2.0 web site. Now I need to add a REST web service so I can communicate with another web application. I've worked on two SOAP web service projects before, but have no expertise with REST at all. I guessed a couple of weeks would be fine, but after googling I found it's not that easy. This is what I found: there is no REST support out of the box in ASP.NET; the WCF REST Starter Kit (CodePlex Preview 2) is based on .NET 3.5 and still in beta; Rest ASP.NET Example; REST Web Services in ASP.NET 2.0 (C#); Exyus; Handling POST and PUT methods with Lullaby; ADO.NET Data Services... Now my questions: a) Is there a REST solution for .NET 2.0? If yes, which one is the best? b) If I have to, how hard is it to migrate my ASP.NET site from 2.0 to 3.5? Is it as simple as a recompile, or do I have to change a lot of code? c) Is the WCF REST Starter Kit good enough to use in production? d) Do I have to learn WCF first, then the WCF REST Starter Kit? Where is the best place to start? I appreciate any help here. Thanks, Wes

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP Systinet or SOA Software are designed to operate by a model in which the architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers, given that they can't incorporate them as part of their daily activities. Having SOA governance products...(read more)

    Read the article

  • MVC2 JSON action, if I want to be RESTful should I allow GET, POST, or Both?

    - by Yads
    The project I'm currently working on has a whole bunch of JSON actions in order to populate cascading dropdowns via ajax calls. Since they're technically select queries and we're trying to be RESTful, we've been marking these actions with the HttpGet attribute. However, by default JsonResult does not allow results to be returned via a GET, so we've had to explicitly call Json(data, JsonRequestBehavior.AllowGet). What I'm wondering is, is this bad practice? Should we only be allowing POST requests to our JSON actions? If it makes a difference, this is an enterprise application that requires a log in to a particular environment before it can be accessed.

    Read the article

  • Best development architecture for a small team of programmers

    - by Tio
    Hi all. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences: in one I was developing on a server on the network (no source control, of course); in another I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and show the clients anything. For me, this last solution (the one I have at home) is the best: a network server with a LAMP stack; the server has a public IP so we can access it by domain name; subdomains for each project; and everybody works directly on the network server. I think the problem arises when two or more people want to work on the same project; in that case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done - which to me seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply: a network server with a LAMP stack (a test server, so to speak), a public server accessible through the web; a LAMP stack on every developer's computer (minus the database); we develop locally, test, then check the changes into the test server, and presto. What do you think? Maybe I should start doing this at home. Thanks and best regards...

    Read the article

  • MVC Architecture

    Model-View-Controller (MVC) is an architectural design pattern first written about and implemented by Trygve Reenskaug in 1978. Trygve developed this pattern during the year he spent working at Xerox PARC on a Smalltalk application. According to Trygve, "The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints." - Trygve Reenskaug on MVC

    The MVC pattern is composed of three core components: the Model, the View, and the Controller. The Model component encapsulates core application data and functionality. The primary goal of the Model is to maintain its independence from the View and Controller components, which together form the user interface of the application. The View component retrieves data from the Model and displays it to the user; it represents the output of the application. Traditionally the View has read-only access to the Model, because it should not change the Model's data. The Controller component receives and translates input into requests on the Model or View components; it is responsible for invoking the methods on the Model that can change the Model's state.

    The primary benefit of using MVC as an architectural pattern, compared to other patterns, is flexibility. This flexibility is due to the distinct separation of concerns it establishes between its three components. Because of that separation, interaction is limited through the use of interfaces instead of classes, which allows each of the components to be hot-swappable when the needs of the application or of availability change. MVC can easily be applied to C# and the .NET Framework. In fact, Microsoft created an MVC project template that sets up the standard MVC structure before any coding begins; the project template also creates folders for the three key components, along with default Model, View, and Controller classes added to the project.

    Personally, I think MVC is a great pattern for web applications because they can be viewed from a myriad of devices: standard web browsers, text-only web browsers, mobile phones, smartphones, iPads, iPhones, just to get started. With potentially increasing accessibility needs, hot-swappable components are a perfect fit: the core functionality of the application can be retained while the View component is swapped out based on the calling device, so that the display is targeted to that specific device.
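
    As a minimal illustration of the three components and their allowed directions of interaction, here is a hedged sketch (JavaScript, illustrative names, not tied to any particular framework):

        // Model: owns the data; knows nothing about View or Controller.
        class Model {
          constructor() { this.items = []; this.listeners = []; }
          add(item) { this.items.push(item); this.notify(); }
          onChange(fn) { this.listeners.push(fn); }
          notify() { for (const fn of this.listeners) fn(this.items); }
        }

        // View: observes the Model read-only and renders it.
        class View {
          constructor(model) { model.onChange((items) => this.render(items)); }
          render(items) { console.log('rendering:', items.join(', ')); }
        }

        // Controller: translates user input into requests on the Model.
        class Controller {
          constructor(model) { this.model = model; }
          handleInput(text) { this.model.add(text); }
        }

        const model = new Model();
        const view = new View(model);
        const controller = new Controller(model);
        controller.handleInput('first item');  // View re-renders via the observer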

    Read the article

  • Using Freepascal\Lazarus JSON Libraries

    - by Gizmo_the_Great
    I'm hoping for a bit of a "simpleton's" demo/explanation of Lazarus/Free Pascal JSON parsing. I've asked a question here, but all the replies are "read this", and none of them are really helping me get a grasp, because the examples are a bit too in-depth and I'm seeking a very simple example to help me understand how it works. In brief, my program reads an untyped binary file in chunks of 4096 bytes. The raw data then gets converted to ASCII and stored in a string. It then goes through the variable looking for certain patterns, which, it turned out, are JSON data structures. I've currently coded the parsing the hard way, using Pos and ExtractANSIString etc. But I've since learnt that there are JSON libraries for Lazarus and FPC, namely fcl-json, fpjson, jsonparser, jsonscanner etc.

    https://bitbucket.org/reiniero/fpctwit/src
    http://fossies.org/unix/misc/fpcbuild-2.6.0.tar.gz:a/fpcbuild-2.6.0/fpcsrc/packages/fcl-json/src/
    http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/packages/fcl-json/examples/

    However, I still can't quite work out HOW I read my string variable and parse it for JSON data, and then access those JSON structures. Can anyone give me a very simple example to help get me going? My code so far (without JSON) is something like this:

        try
          SourceFile.Position := 0;
          while TotalBytesRead < SourceFile.Size do
          begin
            BytesRead := SourceFile.Read(Buffer, SizeOf(Buffer));
            Inc(TotalBytesRead, BytesRead);
            // A custom function to strip out binary garbage, leaving just ASCII readable text
            StringContent := StripNonAsciiExceptCRLF(Buffer);
            if Pos('MySearchValue', StringContent) > 0 then
            begin
              // Do the parsing. This is where I need to do the JSON stuff
              ...

    Read the article

  • ajax upload cannot process JSON response or gives download popup

    - by Jorre
    I'm using the AJAX plugin from Andris Valums: AJAX Upload (http://valums.com/ajax-upload/), Copyright (c) Andris Valums. It works great, except for the fact that I cannot send proper JSON as a response. I'm setting the headers to 'Content-Type', 'application/json' before sending the JSON-encoded response, and in the plugin I'm saying that I'm expecting JSON: responseType: "json". This gives me a download popup asking to download the JSON response file. The strange thing is, when I don't set the correct "Content-Type" on my response, it works. Of course I want to send the correct response type, because all my jQuery 1.4 calls depend on correct JSON. Has anyone else had this same problem, or is there anyone out there willing to try this out? I'd love to use this plugin, but only if I can return proper JSON with the correct content type. Thanks for all your help!
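
    For context, a common explanation (a general fact about iframe-based uploaders, not from the original post): the file is posted to a hidden iframe, and when the iframe's response carries Content-Type: application/json, browsers offer it as a download instead of rendering it. The usual workaround is to serve the JSON body with a text/html or text/plain content type for the upload response only, and parse it on the client. A hedged sketch, assuming the plugin's onComplete callback receives the raw response text:

        new AjaxUpload('#upload-button', {
            action: '/upload',                 // hypothetical server endpoint
            onComplete: function (file, responseText) {
                // The server sent '{"success": true, "file": "photo.jpg"}'
                // as text/html, so no download popup appears; parse it here.
                var data = JSON.parse(responseText);
                console.log(data.success, data.file);
            }
        });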

    Read the article

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on StackOverflow regarding MVC best practices, but most of those seem to revolve around things like using dependency injection, creating helper functions, or the do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, yet very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model-related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (NerdDinner, Music Store, etc.) seem to use a single-tier, two-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC? This question is one part standard Q&A, but another part best practices, so it could go either here or on programmers.se; I am marking it CW. If you feel it would be better suited to programmers.se, then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.
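
    To make the Repository question concrete, here is a framework-neutral sketch (JavaScript, illustrative names) of the layering being asked about - the controller depends on a repository abstraction, so data access lives in its own layer and can be swapped without touching controllers:

        // Data-access layer: the only code that knows how storage works.
        class UserRepository {
          constructor(db) { this.db = db; }            // db: assumed data-access handle
          findById(id) { return this.db.query('users', id); }
        }

        // Presentation layer: receives the repository, never constructs it.
        class UserController {
          constructor(repo) { this.repo = repo; }
          show(id) {
            const user = this.repo.findById(id);       // no direct db access here
            return { view: 'user/show', model: user };
          }
        }

        // Composition root: the one place where concrete types meet,
        // which is also where a DI container would do its wiring.
        const fakeDb = { query: (table, id) => ({ id, name: 'stub' }) };
        const controller = new UserController(new UserRepository(fakeDb));
        console.log(controller.show(7));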

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description:

    There are about 10 teams competing against each other over a 6-week period. Each team's total score (out of 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent regular time intervals during the competition; the points for others are awarded at irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements, and each type has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation; the most complex consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated; others are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

    Primary requirements:

    The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores, and a quasi-real-time dashboard that will show current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

    Decisions:

    Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw input arrives? Calculating each team's total score from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance-sheet analogy makes more sense, because financial "bottom line" calcs may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
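
    A hedged sketch of the "diff" idea mentioned above (JavaScript, illustrative names): keep running totals as a materialized view and apply only the delta when a raw input arrives or is retroactively adjusted, rather than recomputing all 250,000 inputs from scratch. Ad-hoc queries and audits still go to the raw-input store, so the totals must always be rebuildable from it:

        const totals = new Map();   // teamId -> current total score

        // oldScore is the element's previous contribution (0 for a brand-new
        // input); newScore is its recalculated contribution after the change.
        function applyScoreChange(teamId, oldScore, newScore) {
          const delta = newScore - oldScore;
          totals.set(teamId, (totals.get(teamId) || 0) + delta);
        }

        applyScoreChange('team-3', 0, 0.25);     // new raw input arrives
        applyScoreChange('team-3', 0.25, 0.4);   // official retroactively adjusts it
        console.log(totals.get('team-3'));       // 0.4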

    Read the article

  • SharePoint 2010 Workflow for Multiple Items (Architecture)

    - by erobillard
    I had a question today about whether SharePoint 2010 supports workflow on multiple items, since Groove's workflow apparently supported multiple items and that model disappeared when Groove Workspaces were amalgamated into SharePoint Sites and SharePoint Workspace (the client utility). It's a great question; the short answer is that yes, it's possible. You could brute-force it in 2007 and that strategy should still carry over to 2010, and three new features (that I can think of) support multi-item scenarios...(read more)

    Read the article

  • Collision Detection Game Design and Architecture

    - by Chompas
    I've been reading some articles about collision detection. My question here is about ideas on the design for it. Basically, I have a C++ game with a main loop, and entities with an update method. Based on keyboard input, these characters update their positions. My question is not about how to detect collisions; it's about the best way to implement them. The game has a main character, but also enemies that have to collide with each other, so I'm not sure where to put all the iterations for checking collisions, or whether the right way is to check everything against everything. Thanks in advance.
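
    A common baseline, sketched in JavaScript for brevity (the structure carries over directly to a C++ update loop): let every entity update its position first, then run one collision pass that checks each unordered pair once, instead of every entity checking "everything against everything" on its own:

        // Axis-aligned bounding-box overlap test.
        function aabbOverlap(a, b) {
          return a.x < b.x + b.w && b.x < a.x + a.w &&
                 a.y < b.y + b.h && b.y < a.y + a.h;
        }

        // One pass per frame, after all entities have moved.
        function checkCollisions(entities) {
          for (let i = 0; i < entities.length; i++) {
            for (let j = i + 1; j < entities.length; j++) {  // each pair once
              if (aabbOverlap(entities[i], entities[j])) {
                entities[i].onCollide(entities[j]);
                entities[j].onCollide(entities[i]);
              }
            }
          }
        }

        // Main loop shape: input -> update positions -> resolve collisions.
        // When the all-pairs O(n^2) loop gets too slow, a broad phase
        // (uniform grid, quadtree) prunes the candidate pairs first.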

    Read the article

  • #altnetseattle &ndash; REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture session I helped kick off with Andrew. RSS, ATOM, and such are needed for better discovery - i.e. there is still a need for some type of discovery. What's difficult is modeling behaviors in a RESTful way - invoking some type of state change against an object. For instance, a POST vs. a GET: the GET is easy, comes back as is, but what about a POST, which often changes some state or something? A challenge is doing multiple workflows with stateful workflows. How does batch work? Maybe model the batch as a resource. Frameworks aren't particularly part of REST; REST is REST. But the point was argued that REST is modeled, or part of modeling, a state machine of some sort...? Nothing is 100% reliable with REST - comparisons drawn with TCP/IP. Sufficient probability is provided for the communications, but the idea of a possible failure has to be built into the usage model of REST. Ruby on Rails / RESTfully, and others used - what were their issues, what do they do? ATOM feeds, objects serialized, using LINQ to XML with this. No state machine libraries. Idempotency concerns around REST, and single-change POSTs, are inherent in the architecture. REST - one of the constrained languages is for the interaction with the system, limiting what can be done on the resources. (Disagreement: there are no agreed-upon REST verbs.) Sam Ruby - RESTful services: expanding the verbs beyond REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate. Robert Reem used a Factory to deal with the POST to handle the new state, the POST identifying what it just did by its return. Different states are put into POST, so that prospective new verbs can be supported, without creating new REST/HTTP verbs and without breaking universal clients. The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, INSERTs get pushed into abstraction. My 2 cents is that this in a way pushes a huge proprietary burden onto the REST services, often removing the point of REST, which is to be simple and to the point. WADL does provide discovery and some state control (sort of?). Statement made: "WADL" isn't needed - the JSON, XML, or other client-side returned data handles this. I then applied the law of two feet for myself and headed off to finish up these notes, post to the wiki, and figure out what I was going to do next. For the original wiki entry, check it out here. I will be adding more to this in a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more to elaborate.

    Read the article

  • Role of systems in entity systems architecture

    - by bio595
    I've been reading a lot about entity components and systems, and the idea of an entity just being an ID is quite interesting. However, I don't know how this completely works with the components aspect or the systems aspect. A component is just a data object managed by some relevant system. A collision system uses some BoundsComponent together with a spatial data structure to determine if collisions have happened. All good so far, but what if multiple systems need access to the same component? Where should the data live? An input system could modify an entity's BoundsComponent, but the physics system(s) need access to the same component, as does some rendering system. Also, how are entities constructed? One of the advantages I've read so much about is flexibility in entity construction. Are systems intrinsically tied to a component? If I want to introduce some new component, do I also have to introduce a new system or modify an existing one? Another thing that I've read often is that the 'type' of an entity is inferred from what components it has. If my entity is just an id, how can I know that my robot entity needs to be moved or rendered and thus modified by some system? Sorry for the long post (or at least it seems so from my phone screen)!
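
    A minimal hedged sketch of one common arrangement (JavaScript, illustrative names): the entity is only an id, each component type lives in its own store keyed by entity, and a system is just a function over the stores it cares about - so several systems can read the same component without any one of them owning it, and an entity's "type" is nothing more than which stores contain its id:

        let nextEntity = 0;
        const createEntity = () => nextEntity++;   // an entity is just an id

        const bounds = new Map();     // entity -> { x, y, w, h }
        const velocity = new Map();   // entity -> { dx, dy }

        function physicsSystem(dt) {
          for (const [entity, v] of velocity) {
            const b = bounds.get(entity);
            if (!b) continue;                  // needs both components to move
            b.x += v.dx * dt;
            b.y += v.dy * dt;
          }
        }

        function renderSystem() {
          for (const [entity, b] of bounds) {
            // reads the very same BoundsComponent that physics just wrote
            console.log(`entity ${entity} at (${b.x}, ${b.y})`);
          }
        }

        // A "robot" is whatever has robot-ish components; no class needed.
        const robot = createEntity();
        bounds.set(robot, { x: 0, y: 0, w: 1, h: 1 });
        velocity.set(robot, { dx: 2, dy: 0 });
        physicsSystem(0.16);
        renderSystem();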

    Read the article

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server to index and search content which ranges from simple text files to MS documents to PDFs. MIS is also used to crawl file servers, and MIS in tandem with Index Server Companion crawls content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server; thus Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions.

    1. Technology choice: I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using Lucene.Net. The first is that it can't crawl external content - there doesn't seem to be a direct port of Nutch to .NET. The second is that, since it is just an indexer, it can't parse various document types; the parsing is left to the developer. So, what would be the best technology choice on the .NET platform to achieve indexing and crawling? Are there any .NET open source libraries available for document parsing?

    2. Architectural pattern: Is there any general architectural pattern or best practice that needs to be followed in designing such a search engine? Thanks in advance.

    Read the article
