Search Results

Search found 14226 results on 570 pages for 'feature requests'.

Page 461/570 | < Previous Page | 457 458 459 460 461 462 463 464 465 466 467 468  | Next Page >

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitations are when a client uploads a list of files in a single form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions: 1) Should there be separate limits for individual file size and for total request size, or is file size the only limit, so that a request can carry hundreds of files as long as each one is not too large? 2) Does the Spring request wrapper read the complete request and store it in the Java heap, or does it store temporary files so it can handle a bigger quota? 3) Would reading the HttpServletRequest as a stream change the size limits, compared with the application server reading the complete HTTP request at once? 4) What is the bottleneck of this process: the Java heap size, the quota of the filesystem on which my web server runs, the maximum BLOB size allowed by the database in which I am going to save the files, or Spring-internal limits? Related threads that still don't have an exact answer to this: does-spring-framework-support-streaming-mode-in-mutlipart-requests, is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler, how-to-drop-body-of-a-request-after-checking-headers-in-servlet, apache-commons-fileupload-throws-malformedstreamexception
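
    Where those limits usually live: Spring's wrapper delegates to a MultipartResolver, and with the Commons FileUpload-backed resolver both the total request cap and the in-memory threshold are configurable; parts above the threshold are streamed to temporary files on disk rather than held on the heap, so the heap need not be the bottleneck. A minimal sketch, where the bean definition and the limits are illustrative rather than prescriptive:

        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.web.multipart.commons.CommonsMultipartResolver;

        @Configuration
        public class UploadConfig {
            // The DispatcherServlet picks up a bean named "multipartResolver" automatically.
            @Bean(name = "multipartResolver")
            public CommonsMultipartResolver multipartResolver() {
                CommonsMultipartResolver resolver = new CommonsMultipartResolver();
                resolver.setMaxUploadSize(10 * 1024 * 1024); // illustrative: cap the whole request at 10 MB
                resolver.setMaxInMemorySize(256 * 1024);     // illustrative: spill parts above 256 KB to temp files
                return resolver;
            }
        }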

    Read the article

  • Componentizing complex functionality in an MVC web app

    - by NXT
    Hi everyone, this is a question about MVC web-app architecture and how it can be extended to componentize moderately complex units of functionality. I have an MVC-style web app with a customer-facing credit card charge page. I've been asked to allow the admins to enter credit card payments as well, for times when credit cards are taken over the phone. The customer-facing credit card charge section of the website is currently its own controller, with approximately 3 pages and a login. That controller is responsible for: customer login credential authentication; credit card data collection; calling a library to do the actual charge; and reporting the results to the user. I would like to extract the card data collection pages into a component of some kind so that I can easily reuse the code on the admin side of the app. Right now my components are limited to single "view" pages with PHP-style embedded Perl code. This is a simple, custom MVC framework written in Perl, and right now controllers are called directly from the framework to service web requests. My idea is to allow controllers to be called from other controllers, so that I can componentize more complex functionality. For simplicity I think I prefer composition over inheritance, even though it will require writing a bunch of pass-through methods (actions); being Perl, I could in theory do multiple inheritance. I'm wondering if anyone with experience in other MVC web frameworks can comment on how this sort of thing is usually done. Thank you.
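
    For comparison, the composition idea usually amounts to giving the shared card-collection flow its own sub-controller and having each owning controller forward a couple of actions to it. A rough, framework-agnostic sketch, written in Java only for concreteness and with every name invented:

        import java.util.Map;

        // Hypothetical sub-controller that owns the card data collection flow.
        class CardEntryComponent {
            String showForm() { return render("card_form"); }
            String submit(Map<String, String> params) { /* validate, then charge via library */ return render("card_result"); }
            private String render(String view) { return "[view: " + view + "]"; }
        }

        // Owning controllers compose the component and expose pass-through actions.
        class CustomerChargeController {
            private final CardEntryComponent cards = new CardEntryComponent();
            String cardForm() { return cards.showForm(); }                        // pass-through
            String cardSubmit(Map<String, String> p) { return cards.submit(p); }  // pass-through
        }

        class AdminPaymentController {
            private final CardEntryComponent cards = new CardEntryComponent();
            String phonePaymentForm() { return cards.showForm(); }  // same flow, reused for phone payments
        }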

    Read the article

  • Server.TransferRequest returns blank page on specific server

    - by jishi
    I'm facing an issue that seems to be related to configuration. I have a web application based on MonoRail, where we utilize the routing feature from MonoRail. On the first request after the application has started, the routing isn't initialized. To circumvent this, I have the following code in Application_OnError(): public virtual void Application_OnError() { if ( /* identified as routing error */ ) Server.TransferRequest( Context.Request.RawUrl, false ); return; } The problem is that our development server (which runs Windows Server 2008 R2, with IIS 7.5 and .NET 3.5) returns a blank page without headers, but my workstation (which runs Win7, IIS 7.5 and .NET 3.5) works fine. What could be the cause of this? If the code in Application_OnError() throws an exception, what would be the expected output? I have verified the following: the access log shows one entry, though I'm not sure whether a TransferRequest would show up as a second entry when invoked successfully; and the application actually does some work according to my internal logs, and it doesn't die, since subsequent requests work flawlessly (because routing will be active by then). Any hints on what to look for would be greatly appreciated!

    Read the article

  • What is the IoC / "Springy" way to handle MVP in GWT? (Hint, probably not the Spring Roo 1.1 way)

    - by Ehrann Mehdan
    This is the Spring Roo 1.1 way of doing a factory that returns a GWT Activity (yes, Spring Framework): public Activity getActivity(ProxyPlace place) { switch (place.getOperation()) { case DETAILS: return new EmployeeDetailsActivity((EntityProxyId<EmployeeProxy>)place.getProxyId(), requests, placeController, ScaffoldApp.isMobile() ? EmployeeMobileDetailsView.instance() : EmployeeDetailsView.instance()); case EDIT: return makeEditActivity(place); case CREATE: return makeCreateActivity(); } throw new IllegalArgumentException("Unknown operation " + place.getOperation()); } It seems to me that we just went back hundreds of years if we use a switch over constants to build a factory. This is the official auto-generated Spring Roo 1.1 with GWT / GAE integration, I kid you not. I can only assume this is some executive's empty announcement, because this is definitely not Spring. It seems VMware and Google were too quick to get something out and didn't quite finish it, weren't they? Am I missing something, or is this half-baked and by far not the way Spring + GWT MVP should work? Do you have a better example of how Spring, GWT (the 2.1 MVP approach) and GAE should connect? I would hate to do all the plumbing of managing history and activities like this (no annotations? no IoC?). I also would hate to reinvent the wheel and write my own Spring enhancement, just to find someone else did the same, or worse, to find out that SpringSource and Google will release Roo 1.2 soon and make it right.
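
    For comparison, one way to get the switch out of the factory is to have the container inject an operation-to-factory map (e.g. via a Gin MapBinder), so adding an operation means adding a binding rather than editing a switch. A rough sketch; the stand-in types below exist only to keep the example self-contained, and the real GWT classes differ:

        import java.util.Map;

        // Minimal stand-ins for the GWT types, for the sake of the sketch.
        interface Activity {}
        enum Operation { DETAILS, EDIT, CREATE }
        class ProxyPlace {
            private final Operation op;
            ProxyPlace(Operation op) { this.op = op; }
            Operation getOperation() { return op; }
        }

        // Hypothetical factory interface; one implementation per operation.
        interface ActivityFactory {
            Activity create(ProxyPlace place);
        }

        class ProxyPlaceActivityMapper {
            private final Map<Operation, ActivityFactory> factories;

            // The DI container supplies the map; this class never changes.
            ProxyPlaceActivityMapper(Map<Operation, ActivityFactory> factories) {
                this.factories = factories;
            }

            Activity getActivity(ProxyPlace place) {
                ActivityFactory f = factories.get(place.getOperation());
                if (f == null) throw new IllegalArgumentException("Unknown operation " + place.getOperation());
                return f.create(place);
            }
        }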

    Read the article

  • How to encapsulate a third party complex object structure?

    - by tangens
    Motivation: currently I'm using the java parser japa to create an abstract syntax tree (AST) of a Java file. With this AST I'm doing some code generation (e.g. if there's an annotation on a method, create some other source files). Problem: as my code generation becomes more complex, I have to dive deeper into the structure of the AST (e.g. I have to use visitors to extract type information about method parameters). But I'm not sure if I want to stay with japa or if I will change the parser library later. Because my code generator uses freemarker (which isn't good at automatic refactoring), I want the interface it uses to access the AST information to be stable, even if I decide to change the Java parser. Question: what's the best way to encapsulate complex data structures of third-party libraries? I could (1) create my own datatypes and copy the parts of the AST that I need into them; (2) create lots of specialized access methods that work with the AST and produce exactly the information I need (e.g. the fully qualified return type of a method as one string, or the first template parameter of a class); or (3) create wrapper classes for the japa data structures I currently need and embed the japa types inside, so that I can delegate requests to the japa types and transform the resulting japa types back into my wrapper classes. Which solution should I take? Are there other (better) solutions to this problem?
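
    A minimal sketch of the third option, an anti-corruption layer over the parser: the generator programs against a small stable interface, and the parser-specific types appear only inside the adapter, so swapping japa for another parser means rewriting the adapters, not the generator or the freemarker templates. All names here are invented; japa's real classes differ:

        // Stable interface owned by the code generator.
        interface MethodInfo {
            String name();
            String qualifiedReturnType(); // e.g. "java.util.List<java.lang.String>"
        }

        // Hypothetical japa-backed adapter; JapaMethodNode stands in for
        // whatever japa's real method declaration type is.
        class JapaMethodInfo implements MethodInfo {
            private final JapaMethodNode node;
            JapaMethodInfo(JapaMethodNode node) { this.node = node; }
            public String name() { return node.getName(); }
            public String qualifiedReturnType() { return node.resolveReturnType(); }
        }

        // Stand-in for the third-party type, just so the sketch is self-contained.
        class JapaMethodNode {
            String getName() { return "save"; }
            String resolveReturnType() { return "void"; }
        }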

    Read the article

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a problem in how the PHP engine handles identical files that differ only in their file extension. Problem: "An If-Modified-Since conditional request returned the full content unchanged." Also, I measured that the .php extension loads much faster than its identical twin with the .xxx extension, even though the file contents are identical and they differ only in their file extension. "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation." (Screenshots: the homepage ending with .php, and the exact same file ending with .ast.) Given: a home.php file is copied as home.xxx, and this extension is added to .htaccess so it is recognized as a PHP file. The .php file obeys php.ini, where freshness is set to 3 hrs; the non-.php files have to obey .htaccess, where freshness is set to 2 hrs, according to: AddType application/x-httpd-php .php .ast .abc .xxx .etc <IfModule mod_headers.c> ExpiresActive On ExpiresDefault M2419200 Header unset ETag FileETag None Header unset Pragma Header set Cache-Control "max-age=2419200" ##### DYNAMIC PAGES <FilesMatch "\\.(ast|php|abc|xxx)$"> ExpiresDefault M7200 Header set Cache-Control "public, max-age=7200" </FilesMatch> </IfModule> So far so good, and everything loads, except that the non-.php file doesn't cache properly; or, to be more specific, it caches well but doesn't validate well. Only the non-.php file extension causes the error and loads slower. The entire page.php loads faster, as all the elements in it then load properly from cache, while page.abc has the full request returned when it ought to be served from cache, meaning the entire page is slower. Bottom line: what should be changed in order to eliminate the If-Modified-Since conditional request returning the full content unchanged?

    Read the article

  • What caused the rails application crash?

    - by so1o
    I'm sure someone can explain this. We have an application that has been in production for a year. Recently we saw an increase in the number of support requests from people having difficulty signing into the system. After scratching our heads, because we couldn't recreate the problem in development, we decided we'd switch on the debug logger in production for a month. That was June 5th. The application worked fine with the above change, and we were waiting. Then yesterday we noticed that the log files were getting huge, so we made another change in production: config.logger = Logger.new("#{RAILS_ROOT}/log/production.log", 50, 1048576) After this change, the application started crashing while processing a particular file. The offending line of code was: RAILS_DEFAULT_LOGGER.info "Payment Information Request: ", request.inspect As you can see, there was a comma instead of a plus sign. This piece of code was introduced in March. The question is this: why did the application fail only now? If changing the debug level caused the application to process this line of code, it should have started failing on June 5th, not today. Please, someone help us. Are we missing the obvious here? If you don't have an answer, at least let us know we aren't the only ones going bonkers.

    Read the article

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code for the client: public String writeAndReadSocket(String request) { // Write text to the socket BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())); bufferedWriter.write(request); bufferedWriter.flush(); // Read text from the socket BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream())); // Read the prefixed size int size = Integer.parseInt(bufferedReader.readLine()); // Get that many bytes from the stream char[] buf = new char[size]; bufferedReader.read(buf, 0, size); return new String(buf); } public BufferedImage stringToBufferedImage(String imageBytes) { return ImageIO.read(new ByteArrayInputStream(imageBytes.getBytes())); } and the server: # Twisted server code here # The analog of the following method is called with the proper client # request and the result is written to the socket. def worker_thread(): img = draw_function() buf = StringIO.StringIO() img.save(buf, format="PNG") img_string = buf.getvalue() return "%i\r%s" % (sys.getsizeof(img_string), img_string) This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
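
    Two details in the code above are likely culprits, though the thread leaves the question open. First, BufferedReader decodes bytes into characters, and raw PNG bytes are not valid text in any charset, so round-tripping them through char[] and String corrupts the payload; binary data should be read from the raw InputStream (note also that a single read(buf, 0, size) call may return fewer bytes than requested). Second, Python's sys.getsizeof returns the interpreter object's memory footprint, not the payload length; len(img_string) is the number of bytes actually written, so the announced size is wrong. A byte-safe client read might look like this, assuming the server prefixes len(img_string) and a CR:

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import javax.imageio.ImageIO;

        public class ImageSocketReader {
            // Reads "<size>\r<size raw PNG bytes>" without any character
            // decoding on the image payload.
            public static BufferedImage readImage(InputStream in) throws IOException {
                StringBuilder header = new StringBuilder();
                int c;
                while ((c = in.read()) != -1 && c != '\r') {
                    header.append((char) c); // the size prefix is plain ASCII digits
                }
                int size = Integer.parseInt(header.toString());
                byte[] buf = new byte[size];
                new DataInputStream(in).readFully(buf); // loops until all bytes arrive
                return ImageIO.read(new ByteArrayInputStream(buf));
            }
        }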

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database and web services in their iPhone apps and how they simplified it for the entire application. My application is dependent on these services, and I can't figure out an efficient way to use them together due to the (semi)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, which attempts to populate its data source from the database. If nothing exists in the database, then it will send a request via web services to download its content, thus download - parse - persist - query - display. Since the user has the ability to request a refresh of this data, a refresh still requires the same process. Quickly describing what I've implemented and tried to run with: 1st attempt - used a singleton web service class that handled sending web service requests, parsing the result and returning it to the table view controller via delegate protocols. Once the controller received the data it would then be responsible for persisting it to the database and re-returning the result. I didn't like that the only crash protection was checking whether the delegate still responds to the selector (the delegate may have been released). 2nd attempt - used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • Zend Partial + Zend Action Helper causes an additional request to bootstrap?

    - by AndreLiem
    I've been profiling some Zend Framework code with webgrind to see where some bottlenecks are, and I'm noticing some very odd behavior. Using the Zend partial, for example: if I pass a variable whose value comes from a Zend action helper, it results in two requests being made. In sample.phtml: echo $this->partial('partial/embed.phtml', array('url' => $this->url)); In indexcontroller.php: $this->view->url = $this->_helper->Embed()->url; But if I don't pass the value from the helper to the partial, but still run the helper, it only makes one request in webgrind. E.g.: $this->view->url = 'test'; $this->_helper->Embed()->url; Does anybody know why this could be happening? Am I potentially interpreting webgrind incorrectly, or is it really calling the bootstrap twice when an action helper value is tied to a partial? I'm starting to realize how inefficient some components of Zend are. Thanks

    Read the article

  • trouble setting up anonymous login in ejabberd

    - by sofia
    Hi, in ejabberd.cfg I have the following: {host_config, "thisislove-MacBook-2.local", [{auth_method, [internal, anonymous]}, {allow_multiple_connections, false}, {anonymous_protocol, both}]}. But when using the speeqe JavaScript client (speeqe.com) to connect, I see it sends <body rid='1366284187' xmlns='http://jabber.org/protocol/httpbind' to='thisislove-macbook-2.local' xml:lang='en' wait='60' hold='1' window='5' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/> and the server responds with <body xmlns='http://jabber.org/protocol/httpbind' sid='f89bf034b02fa6b884bb0c55be3f1f69e45e3866' wait='60' requests='2' inactivity='30' maxpause='120' polling='2' ver='1.8' from='thisislove-macbook-2.local' secure='true' authid='353072658' xmlns:xmpp='urn:xmpp:xbosh' xmlns:stream='http://etherx.jabber.org/streams' xmpp:version='1.0'><stream:features xmlns:stream='http://etherx.jabber.org/streams'><mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><mechanism>DIGEST-MD5</mechanism><mechanism>PLAIN</mechanism></mechanisms><register xmlns='http://jabber.org/features/iq-register'/></stream:features></body> Notice the mechanisms: DIGEST-MD5 and PLAIN. If I'm not mistaken, it should advertise ANONYMOUS as a mechanism as well. So what happens is that speeqe simply terminates the connection. As such, I'm thinking I must be missing something in the anonymous configuration or the MUC config. In the mod_muc config I have {mod_muc, [ %%{host, "conference.@HOST@"}, {access, muc}, {access_create, muc}, {access_persistent, muc}, {access_admin, muc_admin}, {max_room_name, 190}, {max_room_desc, 190}, {max_users, 500} ]} So what am I missing? Thanks

    Read the article

  • apache proxy module gives 403 forbidden error

    - by naiquevin
    I am trying to use Apache's proxy module for working with XMPP on an Ubuntu desktop. To do this I did the following: 1) Enabled mod_proxy by creating symlinks to proxy.conf, proxy.load and proxy_http.load from /etc/apache2/mods-available/ in the mods-enabled directory. 2) Added the following lines to the vhost: <Proxy http://mydomain.com/httpbind> Order allow,deny Allow from all </Proxy> ProxyPass /httpbind http://mydomain.com:7070/http-bind/ ProxyPassReverse /httpbind http://mydomain.com:7070/http-bind/ I am new to using the proxy module, but what I make of the above lines is that requests to http://mydomain.com/httpbind will be forwarded to http://mydomain.com:7070/http-bind/. Kindly correct me if I'm wrong. 3) Added the rule Allow from .mydomain.com in mods-available/proxy.conf. Now I try to access http://mydomain.com/httpbind and it shows a 403 Forbidden error. What am I missing here? Please help. Thanks. Edit: the problem was solved when I changed the following code in mods-available/proxy.conf: <Proxy *> AddDefaultCharset off Order deny,allow Deny from all Allow from mydomain.com </Proxy> to <Proxy *> AddDefaultCharset off Order deny,allow #Deny from all Allow from all </Proxy> I didn't get what was wrong with the initial code, though.

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD's mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution assembled from a configuration of open source technologies. Thanks

    Read the article

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving an external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [runs on Linux]. It's very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O reads block for requests of size more than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seems to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes that already provide this abstraction? (Something like BufferedInputStream in Java.)

    Read the article

  • SOAP web services in haskell?

    - by Dave
    I have to write a bunch of small web services. They must be defined by a WSDL and speak SOAP-RPC, in order to work with an existing workflow engine and service registry framework. I can, however, serve them on a service stack/platform of my choice. I'm presently writing them in Java, and it's not too bad. But I'm thinking my life might be easier if I were able to write these services in Haskell. Searching on Google, it looks like, once upon a time, someone else had the same idea and started a project called "HAIFA". However, it looks like HAIFA hasn't been maintained for some years, and I couldn't find any other frameworks that support serving up services written in Haskell as SOAP web services. Does anyone know of any other frameworks that will allow me to easily write SOAP-based web services using Haskell? If not, has anyone done this manually (i.e., used XML libraries from Hackage to process the incoming SOAP-RPC requests and create SOAP-RPC-compliant replies)? Was it difficult to do? Any gotchas? Was it worth the effort? Thanks in advance!

    Read the article

  • Precompile assets for a rails engine

    - by Peter Ehrlich
    In a standard app, I have this line in my production.rb, which creates endpoints for non-default precompiled assets: config.assets.precompile += %w( mobile.css ) My Rails engine is a standard Sinatra app with its own assets. In development these assets are served fine, presumably because the web requests are handled by Rails and Sprockets. In production I'm getting 404s on the assets, and I think I have to manually tell Sprockets to provide the files. How can this be done without tight coupling? It isn't evident how to set up env-specific initializers for engines. Is this done? Not only, for example, is config/development.rb within the engine not loaded, but there's no way to get the application class itself without knowing its name, in order to modify configuration. And even if there were, it seems that letting any engine reconfigure the main app would be a very bad idea. So maybe it's better to let asset handling be done by Sinatra itself? Or by another instance of Sprockets for the engine? How do other engines handle this?

    Read the article

  • Problem with impersonating a specific user in WCF service

    - by aJ
    I have a WCF service hosted in IIS on Windows Server 2008. This service needs to write to a shared folder on another machine (Windows XP). The shared folder has write permissions for a particular user, say "X", who exists on both machines, i.e. on the server where the service is running as well as on the machine where the shared folder resides. The service runs under the NETWORK SERVICE account. For the service to access the shared folder, I have added code to impersonate user "X" in the service so that it gets permission to write to the shared folder. Since I want to impersonate user "X" only when I run a particular section of code, I have used the sample impersonation code. Even after the impersonation, the service sometimes fails to write to the shared folder; it works only sporadically. Whereas if I add the identity tag in the Web.config file, it works perfectly fine: <identity impersonate="true" userName="accountname" password="password" /> But the above is not desirable, since it impersonates a specific user for all requests. What I need is to impersonate a specific user only when I run a particular section of code. Also, the impersonation code works absolutely fine when the shared folder is on another Windows Server 2008 machine. Could anyone give me ideas on what's going wrong here?

    Read the article

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with Flex Builder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were: [enterFrameEvent] - 84% cumulative, 32% self time; [reap] - 20% cumulative and self time; [tincan] - 8% cumulative and self time; global.isNaN - 4% cumulative and self time. All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me the results I was expecting. My next step is going to be to start adding debug trace statements in various places to try to track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • Problem executing trackPageview with Google Analytics.

    - by dmrnj
    I'm trying to capture clicks on certain download links and track them in Google Analytics. Here's my code: var links = document.getElementsByTagName("a"); for (var i = 0; i < links.length; i++) { linkpath = links[i].pathname; if( linkpath.match(/\.(pdf|xls|ppt|doc|zip|txt)$/) || links[i].href.indexOf("mode=pdf") >=0 ){ //this matches our search addClickTracker(links[i]); } } function addClickTracker(obj){ if (obj.addEventListener) { obj.addEventListener('click', track , true); } else if (obj.attachEvent) { obj.attachEvent("on" + 'click', track); } } function track(e){ linkhref = (e.srcElement) ? e.srcElement.pathname : this.pathname; pageTracker._trackPageview(linkhref); } Everything up until the pageTracker._trackPageview() call works. In my debugging, linkhref is passed in fine as a string: no abnormal characters, nothing. The issue is that, watching my HTTP requests, Google never makes a second call to the tracking gif (as it does if you call this function in an "onclick" property). Calling the tracker from my JS console also works as expected; it's only in my listener that it fails. Could it be that my listener is not deferring the default action (loading the new page) long enough for the tracker to contact Google's servers? I've seen other tracking scripts that do a similar thing without any deferral.

    Read the article

  • How to pair users? (Like Omegle.com)

    - by Carlos Dubus
    Hi, I'm trying to write an Omegle.com clone script, basically for learning purposes. I'm doing it in PHP/MySQL/AJAX. I'm having problems finding two users and connecting them. The purpose of Omegle is to connect two users "randomly". What I'm doing right now is the following: when a user enters the website, a session is assigned. There are 3 states for each session/user (Normal, Waiting, Chatting). At first the user has the state Normal and a field "connected_to" = NULL. If the user clicks the START button, the state "Waiting" is assigned. Then it looks for another user with state Waiting; if it doesn't find one, it keeps looping, waiting for "connected_to" to change. The "connected_to" field will change when another user clicks START, finds the first user waiting, and updates both sessions accordingly. Now, this has several problems: a user can only be connected to one other user at a time, whereas on Omegle you can open more than one chat simultaneously; and I don't know if this is the best way. About the chat: each user polls for events from the server with AJAX calls. I saw that Omegle, instead of making several HTTP requests each second (say), makes ONE request and waits for an answer, which means the PHP script loops indefinitely until it gets an answer. I did this using set_time_limit(30) each time the loop is started; then, when the AJAX call is done, it starts over again. Is this approach correct? I will appreciate your answers a LOT. Thank you, Carlos

    Read the article

  • loading files through one file to hide locations

    - by Phil Jackson
    Hello all. I'm currently doing a project in which my client does not want file locations (i.e. folder names and paths) to be displayed, so I have done something like this: <link href="./?0000=css&0001=0001&0002=css" rel="stylesheet" type="text/css" /> <link href="./?0000=css&0001=0002&0002=css" rel="stylesheet" type="text/css" /> <script src="./?0000=js&0001=0000&0002=script" type="text/javascript"></script> </head> <body> <div id="wrapper"> <div id="header"> <div id="left_header"> <img src="./?0000=jpg&0001=0001&0002=pic" width="277" height="167" alt="" /> </div> <div id="right_header"> <div id="top-banner"></div> <ul id="navigation"> <li><a href="#" title="#" id="nav-home">Home</a></li> <li><a href="#" title="#">Signup</a></li> It all works, but my question is: will this cause any complications, e.g. for the speed of the site, since all requests are made to one single file, which then loads in the appropriate data? Regards, Phil

    Read the article

  • How to use AFIncrementalStore to binding with a NSManagedObject

    - by Matrosov Alexander
    I am searching for more information on how to use AFIncrementalStore; I need to know how to implement it step by step. Please don't downvote this for the many existing resources; I really need help with it. If I understood right, AFIncrementalStore is a layer for fetching data from the server and mapping it onto the data model. Am I right? So I have a few URLs that I need to map into my local model; all of them use GET requests. For example, from base_url/api/categories I get this string in response: [{"category":{"name":"3d max","id":"1111001","users":[]}}, {"category":{"name":"photoshop","id":"1111002","users":[]}}, {"category":{"name":"auto cad","id":"1111003","users":[]}}] So my question is how I can bind my local DB to this data using AFIncrementalStore. Also, as you can see, there are relationships in the response string that connect to users: the users array will contain ids that correspond to concrete users. So I think the second question is how to specify that the model has to have this relationship.

    Read the article

  • Getting started with workflows in sharepoint 2010

    - by Thomas Stock
    Hi, I'm a beginning SharePoint developer asked to implement the following scenario in SharePoint 2010. We're a bit lost on the best approach to get started, and I'm really struggling to find the best-practice solution. This is the flow: a user can make a request with a title and a description. A mail gets sent to the representative with a link to a form. A representative can approve or reject the request. If approved, a mail gets sent to the board with a link to a form; if rejected, a mail gets sent to the user with the message that the request has been rejected. Once the request has been approved by the representative, the board can approve or reject it, and a mail gets sent to the user and the representative with the decision of the board. So the list has the following fields: Request title, Request description, Representative approval, Representative description, Board approval, Board description. The user should see the following form: Request title (editable), Request description (editable). The representative should see the following form: Request title (read-only), Request description (read-only), Representative approval (editable), Representative description (editable). The board should see the following form: Request title (read-only), Request description (read-only), Representative approval (read-only), Representative description (read-only), Board approval (editable), Board description (editable). My questions: What tool is most appropriate for building the forms: InfoPath, SPD, or VS2010? How do I handle rights to make sure only the board can access the board edit form? What kind of workflow do I use, and when do I start the workflow(s)? What do I use to develop the workflow(s)? How do I handle rights when showing the list view with all requests? How can I build the links in the mails sent to the different groups? Thanks in advance for any advice.

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP, and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go. Considerations: "Good Web design" requires non-idempotent actions to be sent via POST, and this is a non-idempotent action. It is easier to develop and debug this app when the request parameters are present in the URL. The API is not intended for widespread use. It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added. It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations. Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body? Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec: "In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered 'safe'. This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of 'idempotence' in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent."
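
    For what it's worth, the body-less POST is only a few lines in most HTTP clients. A sketch with java.net.HttpURLConnection, where the endpoint and parameters are made up; closing the output stream without writing commits an empty body, and setFixedLengthStreamingMode(0) announces Content-Length: 0:

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class EmptyBodyPost {
            public static void main(String[] args) throws IOException {
                // Hypothetical endpoint; the action's parameters travel in the query string.
                URL url = new URL("http://api.example.com/jobs?action=restart&id=42");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setFixedLengthStreamingMode(0); // Content-Length: 0
                conn.getOutputStream().close();      // commit the empty body
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }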

    Read the article

  • Need help with many-to-many relationships....

    - by yuudachi
    I have a student table and a faculty table. The primary key for student is studentID (SID) and faculty's primary key is facultyID, naturally. Student has an advisor column and a requested-advisor column, which are foreign keys to faculty. That's simple enough, right? However, now I have to throw in dates. I want to be able to view who their advisor was for a certain quarter (such as 2009 Winter) and whom they had requested. The result will be a table like this:

    Year | Term   | SID       | Current | Requested
    ------------------------------------------------
    2009 | Winter | 860123456 | 1       | NULL
    2009 | Winter | 860445566 | 3       | NULL
    2009 | Winter | 860369147 | 5       | 1

    And then, if I feel like it, I could also go ahead and view a different year and a different term. I am not sure what these new table(s) will look like. Will there be a year table with three columns, Fall, Spring and Winter? And what would that Fall, Spring, Winter table hold? I am new to the art of tables, so this is baffling me... Also, I feel I should clarify how the site works so far: admin can approve student requests, and what happens is that the student's current advisor gets overwritten with their request. However, I think I should not do that anymore, right?

    Read the article
