Search Results

Search found 752 results on 31 pages for 'restful'.

Page 25/31

  • What does the Spring framework do? Should I use it? Why or why not?

    - by sangfroid
    So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get people to explain what exactly Spring is or what it does, they can never give me a straight answer. I've checked the intros on the SpringSource site, and they're either really complicated or really tutorial-focused, and none of them give me a good idea of why I should be using it, or how it will make my life easier. Sometimes people throw around the term "dependency injection", which just confuses me even more, because I think I have a different understanding of what that term means.

    Anyway, here's a little about my background and my app: I've been developing in Java for a while, doing back-end web development. Yes, I do a ton of unit testing. To facilitate this, I typically make (at least) two versions of a method: one that uses instance variables, and one that only uses variables that are passed in to the method. The one that uses instance variables calls the other one, supplying the instance variables. When it comes time to unit test, I use Mockito to mock up the objects and then make calls to the method that doesn't use instance variables. This is what I've always understood "dependency injection" to be.

    My app is pretty simple, from a CS perspective. Small project, 1-2 developers to start with. Mostly CRUD-type operations with a bunch of search thrown in. Basically a bunch of RESTful web services, plus a web front-end and then eventually some mobile clients. I'm thinking of doing the front-end in straight HTML/CSS/JS/jQuery, so no real plans to use JSP. I'm using Hibernate as an ORM, and Jersey to implement the web services.

    I've already started coding, and am really eager to get a demo out there that I can shop around and see if anyone wants to invest. So obviously time is of the essence. I understand Spring has quite the learning curve, plus it looks like it necessitates a whole bunch of XML configuration, which I typically try to avoid like the plague. But if it can make my life easier and (especially) if it can make development and testing faster, I'm willing to bite the bullet and learn Spring.

    So please. Educate me. Should I use Spring? Why or why not?
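
    For what it's worth, here is a minimal sketch (all names are hypothetical) of what I currently understand "dependency injection" to mean: the collaborator is handed in through the constructor, which is exactly what lets Mockito substitute a mock without my two-versions-of-every-method trick. As far as I can tell, Spring's pitch is that it automates this wiring:

        import static org.mockito.Mockito.*;

        // Hypothetical collaborator -- in my app this would be a DAO or repository.
        interface StudentRepository {
            String findNameById(long id);
        }

        // Constructor injection: the dependency is passed in rather than
        // constructed inside the class, so no duplicate method variants are needed.
        class StudentService {
            private final StudentRepository repository;

            StudentService(StudentRepository repository) {
                this.repository = repository;
            }

            String describe(long id) {
                return "Student: " + repository.findNameById(id);
            }
        }

        class StudentServiceTest {
            void findsStudent() {
                StudentRepository repo = mock(StudentRepository.class);
                when(repo.findNameById(42L)).thenReturn("Ada");

                StudentService service = new StudentService(repo);
                assert "Student: Ada".equals(service.describe(42L));
            }
        }

    In production, something (Spring, as I understand it) would construct StudentService and hand it a real repository.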


  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a cloud management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by the cloud vendors, no longer will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer its inner workings, CIMI is a de jure standard under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience.

    What does CIMI allow? CIMI is both a model for the resources (computing, storage, networking) in the cloud and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.

    Isn't it too early for a standard? A key feature of a successful standard is that it allows compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional, possibly vendor-specific features have been implemented. As multiple vendors implement such features, they become candidates to be added to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification.

    Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
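
    To make the "document" idea concrete, creating a Machine boils down to something like the following exchange (an illustrative sketch only; the host and template reference are invented and the payload is abbreviated rather than quoted from the spec):

        POST /machines HTTP/1.1
        Host: cloud.example.com
        Content-Type: application/json

        {
            "resourceURI": "http://schemas.dmtf.org/cimi/1/MachineCreate",
            "name": "demoMachine",
            "machineTemplate": { "href": "http://cloud.example.com/machineTemplates/1" }
        }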


  • Understanding exceptional cases

    - by Justin
    I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when out-of-the-ordinary input or state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data using an invalid DQL statement, and second, a developer should immediately be warned that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except in unit testing. Since that's true, it's better to use exceptions than to plague a bunch of code with null/error checks.

    I've read in books that exceptions shouldn't be thrown when validating user input, yet I've seen examples where that guideline is broken. One example is the Zend Framework: if you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they used an exception in this scenario? I guess you can argue that the Zend Framework does not know how to continue processing the request.

    One last example: I wrote a function to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with the web site (such as clicking a button to display the form for inputting data). I don't expect users to be mucking around with the request type. Under normal operating conditions, the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and re-using a function to map resources to their validation services would be very useful. I can then catch the exception and, based on the consumer, return an error message suitable for the request type...
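
    Here is roughly what I have in mind (a sketch; the class and resource names are made up):

        <?php
        class FormNotFoundException extends Exception {}

        class FormResolver
        {
            // Hard-coded map from resource type to the form class that validates it.
            private $forms = array(
                'comment' => 'CommentForm',
                'article' => 'ArticleForm',
            );

            public function resolve($resourceType)
            {
                if (!isset($this->forms[$resourceType])) {
                    // Under normal operation this never happens, so an exception
                    // (rather than a null return the caller might forget to check)
                    // surfaces the bug immediately.
                    throw new FormNotFoundException("No form for type: $resourceType");
                }
                return $this->forms[$resourceType];
            }
        }

    A RESTful consumer could then catch FormNotFoundException at the boundary and translate it into an error message suitable for the request type.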


  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It's a Microsoft SQL Azure Lab and its codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), which can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" addressed to the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, I had a positive confirmation of my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.”

    The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup. [Screenshot of the tool in the original post.] The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style, domain-specific language. [Example shown in the original post.] Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about Microsoft's intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality, and this would inevitably lead to anarchic data management... What do you think about that?


  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from this related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer (use SELECT ... FOR UPDATE as the first line of both transactions), but this leads to a problem: if the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So:

    1. Surely there must be a way to force a ROLLBACK if a transaction is taking forever?

    2. Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like?

    3. If a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console?

    Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released.

    Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first:

        START TRANSACTION;
        SELECT SLEEP(55);
        COMMIT;

    With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
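
    For the record, the closest I've come to the "daemon" idea is a sketch like the following (assuming the InnoDB plugin's information_schema tables are available; the 60-second cutoff is arbitrary), run periodically from cron:

        -- find transactions that have been open longer than 60 seconds
        SELECT trx_mysql_thread_id, trx_started
        FROM information_schema.INNODB_TRX
        WHERE trx_started < NOW() - INTERVAL 60 SECOND;

        -- killing a transaction's connection rolls the transaction back:
        -- KILL <trx_mysql_thread_id>;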


  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first to be turned into the second:

        /widget              =>  /widget/index.php
        /widget/             =>  /widget/index.php
        /widget?act=list     =>  /widget/index.php?act=list
        /widget/?act=list    =>  /widget/index.php?act=list
        /widget/list         =>  /widget/index.php?act=list
        /widget/v2?act=list  =>  /widget/v2.php?act=list
        /widget/v2/?act=list =>  /widget/v2.php?act=list
        /widget/v2/list      =>  /widget/v2.php?act=list

    v2 could also be v45; basically "v\d+". act ("list" in this case) can have many values, and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying the version will go to index.php, which can then decide which specific version file to include. What I am afraid of happening is loops - this should sit in location /widget { right? (As for putting the version of the API in the URL, I'm not trying to be RESTful, and the target audience is small.) Resources on how to do this entirely in index.php using "routers" are also welcome :-/
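
    The closest I've gotten on my own is the following untested sketch (it assumes a separate location ultimately executes the rewritten .php URIs):

        location ~ ^/widget/(v\d+)/(\w+)/?$ {
            # /widget/v2/list -> /widget/v2.php?act=list (extra args get appended)
            rewrite ^ /widget/$1.php?act=$2 last;
        }
        location ~ ^/widget/(v\d+)/?$ {
            # /widget/v2?act=list -> /widget/v2.php?act=list (args pass through untouched)
            rewrite ^ /widget/$1.php last;
        }
        location ~ ^/widget/(\w+)/?$ {
            # /widget/list -> /widget/index.php?act=list
            rewrite ^ /widget/index.php?act=$1 last;
        }
        location ~ ^/widget/?$ {
            # /widget and /widget/ -> /widget/index.php (args pass through)
            rewrite ^ /widget/index.php last;
        }

    On the loop question: "rewrite ... last" does re-enter location matching, but since \w doesn't match the dot in ".php", the rewritten URIs no longer match any of these regexes, so they shouldn't loop.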


  • nginx & php-fpm and custom header

    - by nixer
    I would like to pass a custom header (ACCESS_TOKEN) from a client RESTful application (JS) to the application server (php-fpm). I had read that nginx should pass all HTTP headers to PHP, but somehow it does not reach my PHP :( I can see it in Firebug http://o7.no/N6DM7q but can't see it in the $_SERVER variable; it just does not exist in the $_SERVER array. I'm thinking that I need to pass it manually. Now my config looks like this:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_param REQUEST_URI /index.php$request_uri;
            fastcgi_param SCRIPT_FILENAME /htdocs/index.php;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /htdocs;
        }

    Even when I add a new line in the location definition:

        location @php-fpm {
            include /etc/nginx/fastcgi_params;
            ...
            fastcgi_param ACCESS_TOKEN $http_access_token;
        }

    or add it into the fastcgi_params file, it does not help :( If I put the following line into the location part:

        fastcgi_param ACCESS_TOKEN $http_access_token;

    then in PHP it has an empty value :( How can I pass a custom header from the client to the backend (PHP) via nginx?
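
    One thing I still need to rule out (an assumption on my part, not something I've confirmed in my setup): by default nginx treats header names containing underscores as invalid and silently drops them, which would explain why $http_access_token is empty. Allowing them looks like this:

        server {
            # default is "off"; while it is off, an ACCESS_TOKEN header never
            # reaches $http_access_token or the fastcgi parameters
            underscores_in_headers on;
            ...
        }

    Alternatively, renaming the header to ACCESS-TOKEN (hyphen instead of underscore) should map to the same $http_access_token variable and sidestep the issue entirely.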


  • Java: very slow tomcat and too big war file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java, using Jersey as the framework. At the moment no database is used; it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from Digital Ocean and installed Tomcat 7. All in all Tomcat works, but I have three major problems:

    1) It takes a long time until Tomcat deploys the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excl. war upload time).

    2) The war files are quite big (16MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16MB is a lot for the logic of a small web service.

    3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar. If the app runs for a few days, the response time is quite high and the server seems to be frozen. It works again if I restart Tomcat via ssh.

    You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?


  • Developing high-performance and scalable zend framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images and a scalable server for the requests and the rest.

    My questions are:

    - What do you think about my approach?
    - What solutions would you recommend for developing a high-performance, scalable application expected to have a lot of traffic, using PHP (Zend Framework 2)? I would be interested in your approach.

    I should note that I'm a Zend developer and have been working with Zend for over 3 years; this is why I'm choosing it.


  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool and I can't su into it to check which permissions it has. I'm trying to point lighttpd to "/var/www/lighttpd", which has some example pages in it. The permissions for the files inside are set to -rw-r--r-- and the permissions for the folder containing them are drwxr-xr-x. Doesn't that mean that any user can view these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated.

    Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains

        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = (
            ".html" => "text/html",
            ".txt"  => "text/plain",
            ".jpg"  => "image/jpeg",
            ".png"  => "image/png"
        )

    and I was just trying to get the basic example page working.
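
    Update: for anyone checking the same assumptions I am, these are the commands I plan to run; Fedora's packaged lighttpd apparently runs as a dedicated system account, and system-config-users hides system accounts by default:

        id lighttpd                              # system accounts exist even when the GUI hides them
        ps aux | grep lighttpd                   # confirm which user the server actually runs as
        namei -l /var/www/lighttpd/index.html    # every directory on the path needs the execute bit
        sudo -u lighttpd cat /var/www/lighttpd/index.html   # simulate the server's file access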


  • Developing high-performance and scalable zend framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, plus a CDN for static content, a server for images and a scalable server for the requests and the rest.

    My questions are:

    - What do you think about my approach?
    - What solutions would you recommend in terms of a server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic, using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what would be the best solution for my scalable application).


  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" for a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to use when sending information back to the monitoring server.

    After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!!

    Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there a need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to build a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
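
    To make the comparison concrete, the REST alternative I have in mind is nothing fancier than the sketch below (the endpoint URL is made up, and a real probe would report GlassFish metrics rather than Runtime numbers):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class HealthReporter {
            public static void main(String[] args) throws Exception {
                Runtime rt = Runtime.getRuntime();
                // build a tiny JSON health document from locally available numbers
                String json = String.format(
                    "{\"freeMemory\":%d,\"totalMemory\":%d,\"processors\":%d}",
                    rt.freeMemory(), rt.totalMemory(), rt.availableProcessors());

                // POST it to the (hypothetical) monitoring server
                HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://monitor.example.com/health").openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(json.getBytes("UTF-8"));
                }
                System.out.println("Monitoring server replied: " + conn.getResponseCode());
            }
        }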


  • How to enable gzip HTTP compression on Windows Azure dynamic content

    - by Steven
    Hi all, I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure hosted WCF RESTful service, which returns JSON only from GET and POST requests. I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure etc.), so I think it best to start with a clean slate! I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio.

    So, according to the following link, HTTP compression has now been enabled: http://msdn.microsoft.com/en-us/library/ff436045.aspx ... and I've used the advice at the following page (the URL compression advice only), but I get no compression: http://blog.smarx.com/posts/iis-compression-in-windows-azure

        <urlCompression doStaticCompression="true"
                        doDynamicCompression="false"
                        dynamicCompressionBeforeCache="true" />

    It doesn't help that I don't know what the difference is between urlCompression and httpCompression. I've tried to find out, but to no avail! Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I read somewhere that with the latest tools you can choose which version of the Azure OS you want to use when you publish... but I don't know if that's true, and if it is, I can't find where to choose. Could I be using a pre-HTTP-compression-enabled version? I've also tried the Blowery HTTP compression module, but no results.

    Does anyone have any up-to-date advice on how to achieve this? i.e. advice that relates to the current version of the Azure OS. Cheers! Steven
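
    For reference, my current understanding (which may well be wrong, hence this question) is that urlCompression merely switches static/dynamic compression on or off per request, while httpCompression defines the compression schemes and which MIME types count as compressible. If that's right, a working sketch would need doDynamicCompression="true" and application/json registered as a dynamic type, something like:

        <system.webServer>
            <urlCompression doStaticCompression="true"
                            doDynamicCompression="true"
                            dynamicCompressionBeforeCache="true" />
            <httpCompression>
                <dynamicTypes>
                    <add mimeType="application/json" enabled="true" />
                </dynamicTypes>
            </httpCompression>
        </system.webServer>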


  • WCF 405 Method Not Allowed Crazy Error Help!

    - by devmania
    Hi, I am going crazy. I have read tens of articles, including on Stack Overflow, about this. I am calling the web service in a RESTful way and should enable this in the service and in web.config, so I did that. But as soon as I add the [WebGet()] attribute, I get this crazy error; if I remove it, the service gets called seamlessly. I am using VS 2010 RC1, IIS 7 and Windows 7. Here is my code:

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class Service2
        {
            [OperationContract]
            [WebGet()]
            public List<Table1> GetCustomers(string numberToFetch)
            {
                using (DataClassesDataContext context = new DataClassesDataContext())
                {
                    // Take() expects an int, so the string parameter must be parsed
                    return context.Table1s.Take(int.Parse(numberToFetch)).ToList();
                }
            }
        }

    and my ASPX page code:

        <body xmlns:sys="javascript:Sys" xmlns:dataview="javascript:Sys.UI.DataView">
            <div id="CustomerView" class="sys-template"
                 sys:attach="dataview"
                 dataview:autofetch="true"
                 dataview:dataprovider="Service2.svc"
                 dataview:fetchParameters="{{ {numberToFetch: 2} }}"
                 dataview:fetchoperation="GetCustomers">
                <ul>
                    <li>{{name}}</li>
                </ul>
            </div>
        </body>

    and my web.config code:

        <system.serviceModel>
            <behaviors>
                <endpointBehaviors>
                    <behavior name="Service2AspNetAjaxBehavior">
                        <enableWebScript />
                    </behavior>
                </endpointBehaviors>
            </behaviors>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                                       multipleSiteBindingsEnabled="true" />
            <services>
                <service name="Service2">
                    <endpoint address=""
                              behaviorConfiguration="Service2AspNetAjaxBehavior"
                              binding="webHttpBinding"
                              contract="Service2" />
                </service>
            </services>
        </system.serviceModel>

    I totally appreciate the help.
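
    Update: one assumption I'm now testing, in case it helps anyone else: <enableWebScript /> configures the endpoint for ASP.NET AJAX clients, which call services with POST by default, so it may be fighting with [WebGet]. The plain REST behavior element would look like this instead:

        <endpointBehaviors>
            <behavior name="Service2RestBehavior">
                <webHttp />
            </behavior>
        </endpointBehaviors>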


  • Use XMPP instead of HTTP

    - by pavel
    Hey guys. My friend and I are working on an iPhone application. This application uses the XMPP protocol to provide chat functionality. Right now we are designing the architecture for the application. My friend is working on the iPhone side, and I am the Ruby on Rails guy.

    My friend suggested that we wrap every call that is usually served via HTTP into XMPP. So user registration, user search, profile editing, photo uploading -- everything goes via XMPP. No HTTP at all. My friend wants to use XMPP because he says it's much easier to implement XMPP on the client side than HTTP. As for me, this is bullshit, but we've got a product owner who has been working with my friend for a long time and trusts him. So what I'm trying to do is convince my friend and the product owner that using XMPP for what HTTP handles fine is totally not the best idea. I feel that if we implement everything on XMPP, we will have a pain in the ass till the end of our lives. But how do I prove it?

    P.S. I'm not against chat over XMPP; I am against user search, photo uploading, rankings, nearby search and various other RESTful requests going over it. Please leave a response. Any help appreciated.

    A little update: Yesterday we had a long discussion, and it turns out it's quite hard to receive responses from both XMPP and HTTP in Objective-C, because every single object and its data should be stored in a Core Data model, while this model can't be safely modified from various places. Say, if you use HTTP transport, you always want to use only HTTP transport to update data in your model. And if you use XMPP, you should always use XMPP. So you can't use both. That's what my iPhone buddy told me. It sounds weird to me; can anyone explain?


  • Mocking WebResponse's from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces. However, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests.

    Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse.

    How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this.

    Update: Will's answer was the slap in the face I needed; I knew I was missing a fundamental point!

    - Create an interface that will return a proxy object which represents the XML.
    - Implement the interface twice: one implementation that uses WebRequest, the other that returns static "responses".
    - The interface implementation then instantiates the return type based on either the live response or the static XML.

    You can then pass the required implementation to the service layer when testing or in production. Once I have the code knocked up, I'll paste some samples. Thanks Will :)
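
    Here is the shape of what I'm about to write (a sketch; the names are mine, not from any library):

        public interface IResponseSource
        {
            string GetXml(string url);
        }

        // Production implementation: really hits the remote RESTful service.
        public class WebResponseSource : IResponseSource
        {
            public string GetXml(string url)
            {
                var request = System.Net.WebRequest.Create(url);
                using (var response = request.GetResponse())
                using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }

        // Test implementation: returns XML captured once and saved to disk,
        // so F5 no longer hammers anyone's servers.
        public class CannedResponseSource : IResponseSource
        {
            public string GetXml(string url)
            {
                var fixture = "fixtures/" + new System.Uri(url).Host + ".xml";
                return System.IO.File.ReadAllText(fixture);
            }
        }

    The parsing code only ever sees the returned string, so it neither knows nor cares which implementation produced it.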


  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I very much appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents. CouchDB seems to scale pretty well, but the individual cost of making a request usually scares 'relational' people away. Many small business applications have to deal with only one machine, and that's all. In that case the scalability talk doesn't say much; we need more performance per request, or people will not use it.

    BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from using BERT documents instead of JSON ones? I'm not saying just for retrieving in views, but for everything CouchDB does, including syncing. And, as a consequence, use Erlang functions instead of JavaScript ones.

    This would modify some original CouchDB principles, because today it is very web oriented. But considering that few people would make their database API public, and that its data is usually accessed by users through an application, it would be a good deal to have the ability to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, at an extra cost in those cases because of parsing.


  • Laravel - public layout not needed in every function

    - by fischer
    I have just recently started working with Laravel. Great framework so far! However, I have a question. I am using a layout template like this:

        public $layout = 'layouts.private';

    This is set in my Base_Controller:

        public function __construct()
        {
            // Styles
            Asset::add('reset', 'css/reset.css');
            Asset::add('main', 'css/main.css');

            // Scripts
            Asset::add('jQuery', 'http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js');

            // Switch layout template according to the user's auth credentials.
            if (Auth::check()) {
                $this->layout = 'layouts.private';
            } else {
                $this->layout = 'layouts.public';
            }

            parent::__construct();
        }

    However, I now get an error exception when I try to access functions in my different controllers which should not render any view, e.g. when a user logs in:

        class Login_Controller extends Base_Controller {

            public $restful = true;

            public function post_index()
            {
                $user = new User();
                $credentials = array(
                    'username' => Input::get('email'),
                    'password' => Input::get('password'),
                );

                if (Auth::attempt($credentials)) {

                } else {

                }
            }
        }

    The error I get is that I have not set the content of the various variables used by my public $layout. But since no view is needed in this function, how do I tell Laravel not to include the layout here? The best solution that I myself have come across (don't know if this is a bad way?) is to unset($this->layout); in the post_index() function...

    To sum up my question: how do I tell Laravel not to include public $layout in certain functions where a view is not needed?

    Thanks in advance,
    fischer


  • Does RabbitMq do round-robin from the exchange to the queues

    - by Lancelot
    Hi, I am currently evaluating message queue systems and RabbitMQ seems like a good candidate, so I'm digging a little more into it. To give a little context, I'm looking to have something like one exchange load-balancing the message publishing to multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. Also, the reason I'm thinking of having multiple queues rather than one queue handling the round-robin with the consumers is that I don't want our single point of failure to be at the queue level.

    Sounds like I could add some logic on the publisher side to simulate that behavior by editing the routing key and having the appropriate bindings in place. But that's kind of a passive approach that wouldn't take the pace of message consumption on each queue into account, potentially filling up one queue if the consumer applications for that queue are dead. I was looking for a more proactive way on the exchange entity's side, one that would decide where to send the next message based on each queue's size or something of that nature.

    I read about Alice and the available RESTful APIs, but that seems like a rather heavy-duty solution for making fast routing decisions. Does anyone know if round-robin between the exchange and the queues is feasible with RabbitMQ? Thanks.


  • Heroku Rails Internal Server Error

    - by Ryan Max
    Hello. I get a 500 Internal Server Error when I try to deploy my Rails app on Heroku. It works fine on my local machine, so I'm not sure what's wrong here. It seems to be something with the "sessions" on the home controller. Here is my log:

        ==> production.log <==
        # Logfile created on Sun May 09 17:35:59 -0700 2010
        Processing HomeController#index (for 76.169.212.8 at 2010-05-09 17:36:00) [GET]

        ActiveRecord::StatementInvalid (PGError: ERROR: relation "sessions" does not exist
        : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
          FROM pg_attribute a LEFT JOIN pg_attrdef d
            ON a.attrelid = d.adrelid AND a.attnum = d.adnum
         WHERE a.attrelid = '"sessions"'::regclass
           AND a.attnum > 0 AND NOT a.attisdropped
         ORDER BY a.attnum
        ):
        lib/authenticated_system.rb:106:in `login_from_session'
        lib/authenticated_system.rb:12:in `current_user'
        lib/authenticated_system.rb:6:in `logged_in?'
        lib/authenticated_system.rb:35:in `authorized?'
        lib/authenticated_system.rb:53:in `login_required'
        /home/heroku_rack/lib/static_assets.rb:9:in `call'
        /home/heroku_rack/lib/last_access.rb:25:in `call'
        /home/heroku_rack/lib/date_header.rb:14:in `call'
        thin (1.2.7) lib/thin/connection.rb:76:in `pre_process'
        thin (1.2.7) lib/thin/connection.rb:74:in `catch'
        thin (1.2.7) lib/thin/connection.rb:74:in `pre_process'
        thin (1.2.7) lib/thin/connection.rb:57:in `process'
        thin (1.2.7) lib/thin/connection.rb:42:in `receive_data'
        eventmachine (0.12.10) lib/eventmachine.rb:256:in `run_machine'
        eventmachine (0.12.10) lib/eventmachine.rb:256:in `run'
        thin (1.2.7) lib/thin/backends/base.rb:57:in `start'
        thin (1.2.7) lib/thin/server.rb:156:in `start'
        thin (1.2.7) lib/thin/controllers/controller.rb:80:in `start'
        thin (1.2.7) lib/thin/runner.rb:177:in `send'
        thin (1.2.7) lib/thin/runner.rb:177:in `run_command'
        thin (1.2.7) lib/thin/runner.rb:143:in `run!'
        thin (1.2.7) bin/thin:6
        /usr/local/bin/thin:20:in `load'
        /usr/local/bin/thin:20

        Rendering /disk1/home/slugs/155328_f2d3c00_845e/mnt/public/500.html (500 Internal Server Error)

    And here is my home_controller.rb:

        class HomeController < ApplicationController
          before_filter :login_required

          def index
            @user = current_user
            @user.profile ||= Profile.new
            @profile = @user.profile
          end
        end

    Does it have something to do with the way my routes are set up? Or is it my authentication? (I am using Restful Authentication with Bort.)
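
    Update from my own digging (an assumption I haven't verified yet): the PGError line suggests the sessions table simply doesn't exist in Heroku's Postgres database, i.e. my migrations (including the one that creates the sessions table for ActiveRecord session storage) never ran remotely. If that's the case, this should fix it:

        heroku rake db:migrate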


  • Remove duplicates from a list of nested dictionaries

    - by user2924306
    I'm writing my first Python program to manage users in Atlassian OnDemand using their RESTful API. I call the users/search?username= API to retrieve lists of users, which returns JSON. The result is a list of complex dictionary types that look something like this:

        [
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=andrew",
                "name": "andrew",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=andrew",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=andrew",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=andrew",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=andrew"
                },
                "displayName": "Andrew Anderson",
                "active": false
            }
        ]

    I'm calling this multiple times and thus getting duplicate people in my results. I have been searching and reading but cannot figure out how to deduplicate this list. I figured out how to sort the list using a lambda function. I realize I could sort the list, then iterate and delete duplicates, but I'm thinking there must be a more elegant solution. Thank you!
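
    The approach I'm leaning towards is a sketch like this (page1 and page2 stand in for the lists returned by my repeated API calls, and I'm assuming "name" is unique per user):

        def dedupe_users(results):
            """Drop duplicate user dicts, keyed on the unique 'name' field."""
            seen = set()
            unique = []
            for user in results:
                key = user["name"]
                if key not in seen:
                    seen.add(key)
                    unique.append(user)
            return unique

        all_users = dedupe_users(page1 + page2)

    Keying on a single stable field avoids trying to hash or compare the whole nested dict (which fails, since dicts aren't hashable).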


  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one endpoint, with the server handling content negotiation and retrieval from the appropriate providers.

    Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an albeit limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the endpoint via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth.

    I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?


  • deserialize Json using JavaScriptSerializer Custom Types

    - by Dave
    I have a RESTful WCF service returning a .NET custom type as a JSON string. I am using the .NET 3.5 framework and JavaScriptSerializer to deserialize it into my custom type. Serialization to JSON is handled by WCF.

        using (WebResponse resp = req.GetResponse())
        {
            using (System.IO.StreamReader sreader = new System.IO.StreamReader(resp.GetResponseStream()))
            {
                string jsonString = sreader.ReadToEnd();
                System.Web.Script.Serialization.JavaScriptSerializer serializer =
                    new System.Web.Script.Serialization.JavaScriptSerializer();
                CustomType myType = serializer.Deserialize<CustomType>(jsonString);
                return myType;
            }
        }

    The problem is that I have a property (or element) of CustomType which is a generic list of another custom type, ChildObject. They are in different namespaces. When I deserialize using the code above, I get all the properties of CustomType, including a list of ChildObject. But all the properties of each ChildObject are null. The string returned from the stream reader has all the values for the ChildObject; the values are lost when deserializing. Can anyone help me with this, please?


  • How to marshall non-string objects with JAXB and Spring

    - by lesula
    I was trying to follow this tutorial in order to create my own RESTful web service using the Spring framework. The client does a GET request to, let's say, http://api.myapp/app/students and the server returns an XML version of the classroom object:

        @XmlRootElement(name = "class")
        public class Classroom {

            private String classId = null;
            private ArrayList<Student> students = null;

            public Classroom() {
            }

            public String getClassId() {
                return classId;
            }

            public void setClassId(String classId) {
                this.classId = classId;
            }

            @XmlElement(name = "student")
            public ArrayList<Student> getStudents() {
                return students;
            }

            public void setStudents(ArrayList<Student> students) {
                this.students = students;
            }
        }

    The Student object is another bean containing only Strings. In my app-servlet.xml I copied these lines:

        <bean id="studentsView" class="org.springframework.web.servlet.view.xml.MarshallingView">
            <constructor-arg ref="jaxbMarshaller" />
        </bean>

        <!-- JAXB2 marshaller. Automagically turns beans into xml -->
        <bean id="jaxbMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
            <property name="classesToBeBound">
                <list>
                    <value>com.spring.datasource.Classroom</value>
                    <value>com.spring.datasource.Student</value>
                </list>
            </property>
        </bean>

    Now my question is: what if I wanted to insert some non-String objects as class variables? Let's say I want a tag containing the String version of an InetAddress, such as <inetAddress>192.168.1.1</inetAddress>. How can I force JAXB to call inetAddress.toString() in such a way that it appears as a String in the XML? In the returned XML, non-String objects are ignored!
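
    The standard JAXB hook for this, as far as I can tell from the javadocs, is an XmlAdapter registered with @XmlJavaTypeAdapter; the sketch below is my reading of it, not tested code:

        import java.net.InetAddress;
        import javax.xml.bind.annotation.adapters.XmlAdapter;

        public class InetAddressAdapter extends XmlAdapter<String, InetAddress> {

            @Override
            public String marshal(InetAddress address) {
                // getHostAddress() yields "192.168.1.1"; toString() would
                // produce "hostname/192.168.1.1"
                return address.getHostAddress();
            }

            @Override
            public InetAddress unmarshal(String s) throws Exception {
                return InetAddress.getByName(s);
            }
        }

    It would then be attached to the property in the bound class:

        @XmlJavaTypeAdapter(InetAddressAdapter.class)
        public InetAddress getInetAddress() { ... }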


  • How to enable and use HTTP PUT and DELETE with Apache2 and PHP?

    - by Andreas Jansson
    Hi, it should be so simple. I've followed every tutorial and forum I could find, yet I can't get it to work. I simply want to build a RESTful API in PHP on Apache2. In my VirtualHost directive I say:

        <Directory />
            AllowOverride All
            <Limit GET HEAD POST PUT DELETE OPTIONS>
                Order Allow,Deny
                Allow from all
            </Limit>
        </Directory>

    Yet for every PUT request I make to the server, I get 405 Method Not Allowed. Someone advocated using the Script directive, but since I use mod_php, as opposed to CGI, I don't see why that would work. People mention using WebDAV, but to me that seems like overkill. After all, I don't need DAV locking, a DAV filesystem, etc. All I want to do is pass the request on to a PHP script and handle everything myself. I only want to enable PUT and DELETE for the clean semantics.

    Thanks,
    Andreas
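
    For completeness, here is the sort of handler I plan to put behind it (a sketch): once Apache lets the request through to mod_php, PHP only auto-populates $_GET and $_POST, so the PUT body has to be read from php://input by hand:

        <?php
        $method = $_SERVER['REQUEST_METHOD'];

        switch ($method) {
            case 'PUT':
                // parse the url-encoded request body into $data
                parse_str(file_get_contents('php://input'), $data);
                // ... update the resource identified by the URL with $data ...
                break;

            case 'DELETE':
                // ... delete the resource identified by the URL ...
                header('HTTP/1.1 204 No Content');
                break;
        }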

