Search Results

Search found 18876 results on 756 pages for 'request validation'.

Page 206/756

  • IIS 6 windows 2003 help installing SSL cert

    - by ADAM
    I requested a new SSL cert from GoDaddy, which has been issued. When I try to install it in IIS through the web site's Directory Security tab, I get the error "The pending certificate request for this response file was not found. This request may be cancelled. You cannot install the selected response certificate using this wizard." I may have run the wizard and deleted the pending request. Is there any way I can install the certificate without getting a new one? (I hope so.) I still have the original certrequest.txt file.
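
    If the private key from the original request is still on the server, one commonly described recovery (hedged - the file name and serial number below are placeholders) is to import the issued certificate into the machine store, repair its link to the leftover private key, and then assign it to the site as an existing certificate:

        rem Import the certificate GoDaddy issued (placeholder file name)
        certutil -addstore MY mydomain.com.cer

        rem Re-associate the certificate with the private key created by the original CSR
        rem (use the serial number shown by: certutil -store MY)
        certutil -repairstore MY "<serial number>"

    After that, the IIS 6 certificate wizard should offer "Assign an existing certificate" for the web site.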

    Read the article

  • View another person's calendar details in Outlook 2010

    - by SqlRyan
    I know how to view somebody else's calendar - there are 100 walk-throughs like this one on Google. However, this feature has changed in Outlook 2010, and you no longer get prompted for rights to view another person's calendar, and Outlook just displays their "Free/Busy" information, which doesn't help me. I'd like to request permissions to view the details of their appointments, but I can't find any place to request permissions on their calendar - Outlook 2010 just gives me "Free/Busy" rights and then appears to have no option to request additional rights. Can anybody point me in the right direction?

    Read the article

  • Munin Aggregate Graphs from several servers

    - by Sparsh Gupta
    I am using DNS round-robin load balancing and have divided my total traffic across multiple servers. Each server does around 300-400 req/second, but I am interested in having an aggregate graph telling me the TOTAL requests per second served by our architecture. Is there any way I can do this? Right now each graph in Munin comes up as a separate graph, as each depicts one server. I am using the configuration below, which doesn't work for me. Does this configuration have errors?

        [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request
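
    For comparison, this is roughly the shape the Munin documentation uses for aggregate graphs, defined on a virtual node in munin.conf (the group, plugin and field names below are assumptions, not taken verbatim from the setup above). Note also that the second source line above pairs lb3's hostname with lb2's plugin name, which may itself be the error being asked about:

        [visualwebsiteoptimizer.com;Aggregated]
            update no

            requests.update no
            requests.graph_title Total nginx requests
            requests.graph_vlabel requests per second
            requests.graph_category nginx
            requests.graph_order output
            requests.output.label req/sec
            requests.output.sum \
                lb1.visualwebsiteoptimizer.com:nginx_request.request \
                lb2.visualwebsiteoptimizer.com:nginx_request.request \
                lb3.visualwebsiteoptimizer.com:nginx_request.request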

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design it like so:

    User table:
        UserID (primary key, indexed, no dupes)
        FirstName
        LastName
        Department

    Request table:
        RequestID (primary key, indexed, no dupes)
        <...> various data fields containing request details
        UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

    UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

    DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

    UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

    RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
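
    For reference, a minimal sketch of the simpler two-table design described above (column types and sizes are illustrative assumptions, T-SQL flavour):

        CREATE TABLE Users (
            UserID     INT IDENTITY PRIMARY KEY,   -- indexed, no dupes
            FirstName  NVARCHAR(50) NOT NULL,
            LastName   NVARCHAR(50) NOT NULL,
            Department NVARCHAR(50) NOT NULL
        );

        CREATE TABLE Requests (
            RequestID  INT IDENTITY PRIMARY KEY,
            -- <...> various data fields containing request details
            UserID     INT NOT NULL REFERENCES Users(UserID)
        );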

    Read the article

  • Diskless with Ubuntu 12.04

    - by user139462
    I'm trying to set up a new diskless solution with Ubuntu 12.04, without any success. I followed this howto: https://help.ubuntu.com/community/DisklessUbuntuHowto but the initramfs does not seem to be able to mount my NFS share.

    On the server side, my /etc/exports is:

        /srv/nfs4 192.168.0.0/24(fsid=0,rw,no_subtree_check)
        /srv/nfs4/nfsroot 192.168.0.0/24(rw,no_root_squash,no_subtree_check,fsid=1,nohide,insecure,sync)

    I'm able to mount the NFS share on a standard Ubuntu installation without any problem. I can mount it on any client with either of these commands:

        mount 192.168.0.3:/nfsroot /mnt
        mount 192.168.0.3:/srv/nfs4/nfsroot /mnt

    My /tftpboot/pxelinux.cfg/default config file is:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/nfsroot ip=dhcp rw

    I also tried:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/srv/nfs4/nfsroot ip=dhcp rw

    What I get in the initramfs: with the setting [nfsroot=192.168.0.3:/nfsroot] the diskless client outputs:

        mount call failed - server replied: Permission denied

    and the syslog of my NFS server shows:

        rpc.mountd[1266]: refused mount request from 192.168.0.10 for /nfsroot (/): not exported

    With the setting [nfsroot=192.168.0.3:/srv/nfs4/nfsroot] the diskless client outputs:

        mount: the kernel lacks NFS v3 support

    and the syslog of my NFS server shows:

        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: authenticated mount request from 192.168.0.10:834 for /srv/nfs4/nfsroot (/srv/nfs4/nfsroot)
        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: refused unmount request from 192.168.0.10 for /root (/): not exported

    Read the article

  • Intercept Apache communication

    - by Nathan Adams
    I am looking to develop a solution that eliminates potential spammers. The way this system will work is that it will watch connections and requests. Going into the specifics is more a question for Stack Overflow, but what I am interested in is whether it is possible to tell Apache to pass the request over to my application first and give it the ability to accept or deny the request. Sure, it will make requests slower, but I think that is a trade-off I am willing to make. I still want, however, Apache to run the request through any interpreters (such as PHP). The idea is that one wouldn't have to implement anti-spam measures on a per-app basis but have an "umbrella" of spam protection.
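
    One hedged way to get that "accept/deny first, then hand off to PHP as usual" behaviour without touching each app is mod_rewrite's external rewriting program. The map program path and its ALLOW/DENY reply protocol below are assumptions for illustration; the script reads one lookup key per line on stdin and writes one reply per line:

        # In the server or virtual-host config (RewriteMap is not allowed in .htaccess)
        RewriteEngine On
        RewriteMap spamcheck "prg:/usr/local/bin/spamcheck"

        # Ask the external program about the client address; forbid the request if it answers DENY
        RewriteCond ${spamcheck:%{REMOTE_ADDR}} =DENY
        RewriteRule ^ - [F]

    Requests that are not denied continue through Apache's normal handler chain, so PHP and other interpreters still run. mod_security is another option if writing a full module feels like overkill.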

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption - decryption scenario as shown below.

    //[Clear text ID string as input] -- [(ASCII GetBytes) + Encoding] -- [Encryption as byte array] -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]
    //[ID stored as VarBinary in Database] -- [Read as byte array] -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- Show as string in the UI

    My question is about the decryption scenario. After decryption I get a byte array. I am doing an encoding (IBM037) after that. Is it correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;

            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;

            using (KTestService operation = new KTestService())
            {
                return ((operation.KTrialOperation(inputRequest)).Test_Trial_Response.ResponseText);
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;

            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;

            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;

                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;

                // Convert from one byte format to the other (IBM to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);

            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;

            // Convert from one byte format to the other (ASCII to IBM)
            byte[] asciiBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return asciiBytes;
        }

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                    obj = x.X(arg1, arg2)
                    self._registry[hash_id] = obj
                    return obj

        # module x.py
        class X:
            # ...

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id).

    Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do?

    I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier, or not realize that they invalidated existing cache entries, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
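
    As one illustration of folding the "version of the code" into the key, a minimal sketch (the dependency list is supplied by hand here; discovering it automatically is exactly the hard part discussed above):

        import os

        def make_cache_key(arg1, arg2, dependency_paths):
            """Cache key that changes whenever any listed source file changes."""
            stamps = tuple(
                (path, os.path.getmtime(path)) for path in sorted(dependency_paths)
            )
            return hash((arg1, arg2, stamps))

        # Hypothetical usage: x.py plus whatever it is known to import.
        key = make_cache_key("foo", 42, ["x.py", "x_helpers.py"])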

    Read the article

  • Windows Metro Requests

    - by Scott Dorman
    Windows 8 and Windows Metro style apps have a lot of potential, but only if application vendors realize there is a demand to see their app as a Metro style app and not just as a desktop app (or worse, only as an Android or iOS app). As consumers, the only thing we can do is be vocal about our desire to see these apps on Windows 8 as Metro style apps. In an effort to raise awareness, I just launched WinMetro Requests. This is our opportunity to request Windows Metro style apps and show those companies just how much interest there is in seeing their app as a Metro style app. This site is running on UserVoice, so it allows you to easily submit application requests, add comments, and, more importantly, vote for your favorite applications to come to Windows as Metro style apps! As I find out the status of requested applications, I will update the status of the request. If you know of and have official communication from one of the companies indicating they will be or are working on a Windows Metro style app, please let me know and I'll update the status of the request after verifying (or at least trying to verify) the information.

    Read the article

  • Rest client throw timeout exception

    - by shandu
    Hi, I have created a REST client in C# using the example on this page: http://msdn.microsoft.com/en-us/library/aa395208(v=vs.90).aspx. The server is built in PHP. When I send a request to some URLs I get this exception: "The request channel timed out while waiting for a reply after 00:00:59.9531250. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." But sometimes, when I debug the code, I get a response. How do I solve this?
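
    As the exception text suggests, the binding's SendTimeout (one minute by default) is the first knob to try. A hedged sketch for the client's app.config, assuming the webHttpBinding from the linked sample and using a placeholder binding name that the endpoint would reference via its bindingConfiguration attribute:

        <bindings>
          <webHttpBinding>
            <!-- "longTimeout" is a placeholder name; the defaults are 00:01:00 -->
            <binding name="longTimeout" sendTimeout="00:05:00" receiveTimeout="00:05:00" />
          </webHttpBinding>
        </bindings>

    If raising the timeouts only moves the problem, the intermittent success while debugging points at the PHP side occasionally being slow rather than at the client.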

    Read the article

  • How to port forward HTTPS traffic via SSH and/or Remote Desktop through several networks and PCs?

    - by donttellya
    I have the following environment: in company X I develop an application on PC A, in network A with IP address 192.168.100.50, which has to make an HTTPS request to an HTTP server located in the intranet of company Y. In company X there is another PC B, in network B with IP address 192.168.200.100. PC B (of company X) can access the intranet of company Y via an SSH tunnel (PuTTY). PC A (of company X) can ping PC B (of company X); note that PC A can also open a Remote Desktop connection to PC B. PC B can ping the HTTP server; PC A can not ping the HTTP server. How can the HTTPS request from PC A of company X get to the HTTP server of company Y? On which PC must PuTTY be configured? And which settings for host, port forwarding etc. have to be made in PuTTY? So finally the HTTPS request should go from PC A - PC B - HTTP server in company Y.
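
    One hedged arrangement (OpenSSH syntax shown for brevity; in PuTTY the equivalent is a Local tunnel plus the "Local ports accept connections from other hosts" option, and the gateway and server names below are placeholders):

        # On PC B (192.168.200.100): forward a listening port through the existing SSH gateway of company Y.
        # -g lets other machines, i.e. PC A, connect to the forwarded port.
        ssh -g -L 8443:intranet-http-server.company-y.local:443 user@ssh-gateway.company-y.example

        # On PC A: send the HTTPS request to PC B instead of the real server, e.g.
        #   https://192.168.200.100:8443/...

    So PuTTY (or ssh) is configured on PC B, and the application on PC A only needs to be pointed at PC B's forwarded port.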

    Read the article

  • Html.ValidationSummary and Multiple Forms

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/html.validationsummary-and-multiple-forms.aspx

    The Html.ValidationSummary helper writes a div with a list of general errors added to the model state while a request is being serviced. There is generally one form per view or partial view, I think, so often there is only one call to Html.ValidationSummary in the page resulting from the assembly of your views. And, consequently, there is no problem with the markup that Html.ValidationSummary spits out as a result.

    What if you want to put multiple forms in one view? Even if you create a view model that's an aggregate of the view models for each form, the error validation summary is going to contain errors from both forms. Check out this screen shot, which shows a page with multiple forms. Notice how the error validation summary shows up twice. Grrr! Errors for the login form also show up in the registration form.

    Luckily, there is an easy way around this. Pull the errors out of the model state and separate them for each form. You'll need to identify the appropriate form by setting the key when you make calls to ModelState.AddModelError. Assume in my example that errors for the login form are added to model state using the "LoginForm" key. And, likewise, assume that errors for the registration form are added to model state using the "RegistrationForm" key. An example of that might look like this…

        // If we got this far, something failed, redisplay form
        ModelState.AddModelError("LoginForm", "User name or password is not right...");
        return View(model);

    Over in the code for your View, you can pull each form's errors from the model state using lambda expressions that look like these…

        var LoginFormErrors = ViewData.ModelState.Where(ms => ms.Key == "LoginForm");
        var RegistrationFormErrors = ViewData.ModelState.Where(ms => ms.Key == "RegistrationForm");

    Now that you have two collections containing errors, you can display only the errors specific to each form. I'm doing that in my code by removing the calls to Html.ValidationSummary and replacing them with enumerators that look like this…

        @if (LoginFormErrors.Count() > 0)
        {
            <div class="cdt-error-list">
                <ul>
                @foreach (var entry in LoginFormErrors)
                {
                    foreach (var error in entry.Value.Errors)
                    {
                        <li>@error.ErrorMessage</li>
                    }
                }
                </ul>
            </div>
        }

    …and for the registration form, the code looks like this…

        @if (RegistrationFormErrors.Count() > 0)
        {
            <div class="cdt-error-list">
                <ul>
                @foreach (var entry in RegistrationFormErrors)
                {
                    foreach (var error in entry.Value.Errors)
                    {
                        <li>@error.ErrorMessage</li>
                    }
                }
                </ul>
            </div>
        }

    The result is a nice clean separation of the list of errors that are specific to each form. And, this is important because each form is submitted separately in my case, so both forms don't generate errors in the same context. As you'll see in the screen shot below, errors added to the model state when the login form is submitted do not show up in the registration form's validation summary.

    Read the article

  • Would form keys reduce the amount of spam we receive?

    - by David Wilkins
    I work for a company that has an online store, and we constantly have to deal with a lot of spam product reviews, and bogus customer accounts. These are all created by automated systems and are more of a nuisance than anything. What I am thinking of (in lieu of captcha, which can be broken) is adding a sort of form key solution to all relevant forms. I know for certain some of the spammers are using XRumer, and I know they seldom request a page before sending us the form data (Is this the definition of CSRF?) so I would think that tying a key to each requested form would at least stem the tide. I also know the spammers are lazy and don't check their work, or they would see that we have never posted a spam review, and they have never gained any revenue from our site. Would this succeed in significantly reducing the volume of spam product reviews and customer account creations we are seeing? EDIT: To clarify what I mean by "Form Keys": I am referring to creating a unique identifier (or "key") that will be used as an invisible, static form field. This key will also be stored either in the database (relative to the user session) or in a cookie variable. When the form's target gets a request, the key must be validated for the form's data to be processed. Those pesky bots won't have the key because they don't load the javascript that generates the form (they just send a blind request to the target) and even if they did load the javascript once, they'd only have one valid key, and I'm not sure they even use cookies.
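
    As a rough framework-agnostic sketch of that idea (Python here purely for illustration; the in-memory session store and the names are assumptions):

        import hashlib, hmac, os

        SECRET = os.urandom(32)      # per-deployment secret, never sent to the client
        issued_keys = {}             # session_id -> set of outstanding form keys

        def issue_form_key(session_id):
            """Called when the form page is rendered; the value goes into a hidden field."""
            nonce = os.urandom(16).hex()
            key = hmac.new(SECRET, f"{session_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
            issued_keys.setdefault(session_id, set()).add(key)
            return key

        def validate_form_key(session_id, submitted_key):
            """Called on POST; accept only keys previously issued to this session, and only once."""
            keys = issued_keys.get(session_id, set())
            if submitted_key in keys:
                keys.remove(submitted_key)
                return True
            return False

    A bot that posts blindly to the form target never receives a key, so its submissions are rejected before a review or account is ever created.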

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37   19.5     34    319
        Processing:    80 6273 1673.7   6907   8987
        Waiting:       47 3436 2085.2   3345   8856
        Total:        115 6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
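
    For orientation, the headline figures are related by simple arithmetic: 1000 requests / 37.030 s ≈ 27.01 requests per second; with 200 concurrent clients the mean time per request is roughly 200 / 27.01 ≈ 7.4 s (the 7406 ms figure); and each response is about 13,524,000 bytes / 1000 ≈ 13.5 KB. So every request completes, but each one spends around 6-7 s in the Processing phase, which is where the investigation should start.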

    Read the article

  • HAPROXY per domain redirection

    - by SecondThought
    I'm trying to route requests arriving at my load balancer to a separate backend, by domain name, with an acl and hdr_dom. The routing works OK for the first request - 'GET /' (the destination server is a WordPress site) - but when the client asks for the assets ('GET /blablabla/style.css' for example), haproxy no longer sends them to the right backend, but to the default one. In the haproxy log I can see the correct host that the request is for (the one that I defined with hdr_dom), but it's as if, since the GET request itself is relative (I mean not containing the domain but only /blablabla and so on), haproxy doesn't recognize it with hdr_dom. I'm just guessing here.. Please help...
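
    For reference, a domain-routing sketch of the kind described (backend names and addresses are placeholders). If only the first request on a connection is routed correctly, the usual cause is connection-level keep-alive: adding "option http-server-close" (or "option httpclose" on older builds) to the frontend makes haproxy evaluate the ACLs on every request rather than once per connection:

        frontend www
            bind *:80
            option http-server-close
            acl is_blog hdr_dom(host) -i blog.example.com
            use_backend blog_servers if is_blog
            default_backend default_servers

        backend blog_servers
            server wp1 10.0.0.10:80 check

        backend default_servers
            server web1 10.0.0.20:80 check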

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu, with the latest nginx (stable); the network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering if this system is in good health for now, e.g. what is the queue length (I know nginx can handle a lot of concurrent requests, but I mean: before a request is being served, how many of them have to wait), and what is the average queue time for a given request to be served? I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470
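
    Reading the stub_status block above: accepts and handled are both 90664283, so no connections have been dropped for lack of worker connections; 104117012 requests over 90664283 connections is roughly 1.15 requests per connection; and of the 4076 active connections, 525 + 81 = 606 are currently being read or written while the remaining 3470 are idle keep-alive connections. stub_status does not expose a queue length or queue time, so whether the box is CPU-bound from SSL is better judged from CPU utilisation and response latency than from these counters.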

    Read the article

  • WCF REST Error Handler

    - by Elton Stoneman
    I've put up on GitHub a sample WCF error handler for REST services, which returns proper HTTP status codes in response to service errors.

    The code is very simple - a ServiceBehavior implementation which can be specified in config to attach the RestErrorHandler to a service. Any uncaught exceptions will be routed to the error handler, which sets the HTTP status code and description in the response, based on the type of exception.

    The sample defines a ClientException which can be thrown in code to indicate a problem with the client's request, and the response will be a status 400 with a friendly error message:

        throw new ClientException("Invalid userId. Must be provided as a positive integer");

    - responds:

        Request URL: http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/lastLogin?userId=xyz
        Error Status Code: 400, Description: Invalid userId. Must be provided as a positive integer

    Any other uncaught exceptions are hidden from the client. The full details are logged with a GUID to identify the error, and the response to the client is a status 500 with a generic message giving them the GUID to follow up on:

        var iUserId = 0;
        var dbz = 1 / iUserId;

    - logs the divide-by-zero error and responds:

        Request URL: http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/dbz
        Error Status Code: 500, Description: Something has gone wrong. Please contact our support team with helpdesk ID: C9C5A968-4AEA-48C7-B90A-DEC986F80DA5

    The sample demonstrates two techniques for building the response. For client exceptions, a friendly HTML response is sent in the body as well as the status code and description. Personally I prefer not to do that - it doesn't make sense to get a 400 error and find text/html when you're expecting application/json, but it's easy to do if that's the functionality you want. The other option is to send an empty response, which the sample does with server exceptions.

    The obvious extension is to have multiple exceptions representing all the status codes you want to provide; then your code is as simple as throwing the relevant exception - UnauthorizedException, ForbiddenException, NotImplementedException etc - anywhere in the stack, and it will be handled nicely.

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code to the web servers, I am doing this for each machine:
    1. Shut down Apache
    2. Update the code
    3. Start the server again and proceed to the next server
    I think that when my server is shut down, ELB will not distribute requests to it, but what about the requests still being served? I think a better approach is:
    1. Stop accepting new requests from ELB
    2. Sleep for some time; shut down the web server only once all requests have been responded to
    3. Update the code
    4. Start the server again
    But how do I perform (1) and (2) from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
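
    Step (1) is what ELB's deregister operation does; a hedged sketch with the AWS CLI (a classic ELB is assumed, and the load balancer name and instance ID are placeholders):

        # Take one instance out of rotation before deploying to it
        aws elb deregister-instances-from-load-balancer \
            --load-balancer-name my-elb --instances i-0123456789abcdef0

        # ...wait for in-flight requests to drain, deploy, restart Apache...

        # Put it back into rotation
        aws elb register-instances-with-load-balancer \
            --load-balancer-name my-elb --instances i-0123456789abcdef0

    The same operations exist in the older ELB API command-line tools and in the SDKs, so a deploy script on the local machine can drive them directly.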

    Read the article

  • Cannot access my Apache server remotely

    - by Jichao
    I have bought a VPS server and set up the Apache server, but I can only access the web page locally. I thought maybe the server was not receiving access from outside. I tried Firefox, but the access_log shows nothing was accessed. However, when I telnet http://www.59lt.com 80 and type nonsense, I receive an error response, and the access_log under /etc/httpd/logs also records that access. This proves that the server does accept requests from outside, so why does it ignore the normal request from Firefox but accept the request from telnet? Thanks. PS: I'm using CentOS + yum-installed Apache (just now installed).

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question, but I am having a hard time tracking down a straightforward solution, so I appreciate any help and patience with me on this: I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser HTTP request for www.olddomain.com gets passed to the proxy server, which then routes the request to www.newdomain.com, which sends a response to the proxy server, which then passes it back to the web browser. It seems so simple, yet I don't see how to achieve this with Apache. I know Squid/Squirm offer this functionality, so I am guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
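
    A hedged sketch of the usual mod_proxy arrangement for this (the domain names follow the question; everything else is illustrative and assumes mod_proxy and mod_proxy_http are loaded):

        <VirtualHost *:80>
            ServerName www.olddomain.com

            ProxyRequests Off

            # Forward every request to the new domain and fix up response headers on the way back
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>

    If a visible redirect is acceptable instead of transparent proxying, a simple "Redirect permanent / http://www.newdomain.com/" does the job with far less machinery.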

    Read the article

  • Application to handle form approval

    - by ChrisMuench
    Hello, hopefully this is the right place for this question. I have done a fair amount of research and have yet to find anything that matches what I want. What I'm envisioning is the following; let me know if any of you know of a program that will do it. It must also be web-based. An anonymous user fills out a form; an email gets sent to the admin saying xyz has filled out form abc, with links to approve/disapprove the request. The admin can also log in, edit the form and resend the results to the original submitter. Also, once the admin approves/disapproves the request, the original submitter gets an approve/disapprove email. And you can search by date submitted, specific project/form, and status of request (submitted, approved, disapproved). Any ideas at all on where I could find this? I started to look into Drupal with workflows and actions, but it just doesn't flow right for this.

    Read the article

  • A tip: Updating Data in SharePoint 2010 using REST API

    - by Sahil Malik
    SharePoint 2010 Training: more information

    Here is a little tip that will save you hours of head scratching. See, there are two ways to update data in SharePoint using the REST-based API. A PUT request is used to update an entire entity; if no values are specified for fields in the entity, the fields will be set to default values. A MERGE request is used to update only those field values that have changed; any fields that are not specified by the operation will remain set to their current value. Now, sit back and think about it. You are going to update the entire entity! Hmm. Which means you need to a) specify every column value, and b) ensure that the read-only values match what was supplied to you. What a pain in the donkey! So 99 times out of 100, a PUT request will give you an "HTTP 500 internal server error occurred", which is just so helpful. Read the full article ....
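
    For illustration, this is roughly what the MERGE variant looks like on the wire against the 2010 ListData.svc endpoint (the site, list, item id and field name are placeholders; many HTTP clients tunnel the verb as a POST with an "X-HTTP-Method: MERGE" header instead):

        MERGE http://server/site/_vti_bin/ListData.svc/Tasks(42) HTTP/1.1
        Content-Type: application/json
        If-Match: *

        { "Title": "Updated title - unspecified fields keep their current values" }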

    Read the article

  • MVC design patterns

    - by insane-36
    I have an application and it does not use a very good structure. However, it seems to me that I have tried to stick to the MVC design pattern, but a senior engineer claims that I have no design patterns and the code is a mess. How I have structured the code: I have a couple of NSManagedObject model classes, which represent the model in my case, and the RestKit library, which encapsulates NSURLConnection and the URL requests. I fetch the request from the view controller itself, and when the request gets completed I create a predicate and then populate the table view. Wherever I need a custom view, I either create it in a nib or in a custom subclass of UIView. I have used the delegation pattern and notifications to communicate with view controllers and views, and block callbacks with RestKit. But the senior engineer is very new to iOS; he has been doing it for 2 months now, though he is a good Java programmer. So, what is the MVC pattern? Is Core Data not working as the model objects, the view controller as the controller, and the views as the views? I don't seem to find any other place or any other case to create my own model object, since most of the models are used as NSManagedObject subclasses.

    Read the article

  • OOF (Out of Office) is not working for remote users (Outlook Anywhere)

    - by Doughecka
    I'm not sure how long this issue has been happening, but recently a few of the remote sales users were going to a sales meeting and wanted to set their Out of Office... however in Outlook 2010, they get this error message: "Your automatic reply settings cannot be displayed because the server is currently unavailable". When I run the Exchange Remote Connectivity Analyzer, Autodiscover completes fine, but the next step fails:

        Exception details:
        Message: The request failed. The remote server returned an error: (403) Forbidden.
        Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
        Stack trace:
        at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
        at Microsoft.Exchange.WebServices.Data.MultiResponseServiceRequest`1.Execute()
        at Microsoft.Exchange.WebServices.Data.ExchangeService.BindToFolder[TFolder](FolderId folderId, PropertySet propertySet)
        at Microsoft.Exchange.Tools.ExRca.Tests.EnsureEmptyFolderTest.PerformTestReally()

        Exception details:
        Message: The remote server returned an error: (403) Forbidden.
        Type: System.Net.WebException
        Stack trace:
        at System.Net.HttpWebRequest.GetResponse()
        at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
        at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

    I've done some research, but I have yet to find a working fix for this... it seems like some permissions are messed up in IIS, but I haven't figured out what.

    Read the article
