Search Results


  • setcookie, Cannot modify header information - headers already sent

    - by Nano HE
    Hi, I am new to PHP. I just practised setcookie() and it failed.

    http://localhost/test/index.php:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title></title>
        </head>
        <body>
        <?php
        $value = 'something from somewhere';
        setcookie("TestCookie", $value);
        ?>
        </body>
        </html>

    http://localhost/test/view.php:

        <?php
        // I plan to view the cookie value via view.php
        echo $_COOKIE["TestCookie"];
        ?>

    But index.php fails to run, and IE shows a warning like this:

        Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\test\index.php:9) in C:\xampp\htdocs\test\index.php on line 12

    I have definitely enabled cookies in IE 6. Is there anything wrong in my procedure above? Thank you. WinXP OS and XAMPP 1.7.3 used.


  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string.

    I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx

    I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, pulls that data into a database on the organization's server, and then rebuilds the cube. Then that cube would be queried without having to connect to the other organization's server when viewing the cube.

    Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info; I'm still unclear on how some of this works.


  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello. Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect.

    I want to log in to a website and get some data from there. My problem is that the cookies do not work. Every time, the website says that I should activate cookies, but I have activated them through a CookieContainer. I sniffed the traffic several times during the login process and I see no problem there. I tried different methods to log in, and I have searched for others with this problem, but no results...

    The login page is "www.uploaded.to". Here is my code to log in, in short form:

        private void login()
        {
            // Global CookieContainer for all the cookies
            CookieContainer _cookieContainer = new CookieContainer();

            // First, log in to the website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);
            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            // Follow the link from request 1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;
            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            // Get the data from the page after login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;
            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I have been stuck on this problem for many weeks and have found no solution that works. Please help...
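    For comparison, here is the same three-step flow as a minimal Python 2 sketch using cookielib, which plays the role of the CookieContainer (URLs mirror the post; credentials are placeholders). If this sequence also lands on the "activate cookies" page, the site is probably setting its session cookie via JavaScript rather than Set-Cookie headers, which no cookie jar will pick up:

        import cookielib
        import urllib
        import urllib2

        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # Step 1: POST the credentials.
        opener.open('http://uploaded.to/login',
                    urllib.urlencode({'email': 'XXXXX', 'password': 'XXXXX'}))

        # Step 2: follow the link from step 1.
        opener.open('http://uploaded.to/login?coo=1')

        # Step 3: fetch the post-login page with the accumulated cookies.
        print opener.open('http://uploaded.to/home').read()

        # See which cookies the server actually handed back.
        for c in jar:
            print 'cookie:', c.name, '=', c.value

    Dumping the jar after each step shows whether the server ever sent a session cookie at all, which narrows the problem down considerably.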


  • Does mod_php honor HEAD requests properly?

    - by rkulla
    The HTTP/1.1 RFC stipulates: "The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response." I know Apache honors the RFC, but modules don't have to. My question is: does mod_php5 honor this?

    The reason I ask is that I just came across an article saying that PHP developers should check this themselves with:

        if (stripos($_SERVER['REQUEST_METHOD'], 'HEAD') !== FALSE) {
            exit();
        }

    But seeing as browsers send HEAD requests for cache checking, it seems unlikely to me that no book, docs, etc., advise PHP developers to do this check. I googled for a bit and not much turned up, other than some people saying they tried strange things like mod_rewrite/redirect after getting HEAD requests, and some old bug ticket from around 2002 claiming that mod_php still executed the rest of the script by default.

    So I ran a quick test using PECL::HTTP:

        http_head('http://mysite.com/test-head-request.php');

    while having this in test-head-request.php, to see if the rest of the script still executed:

        <?php error_log('REST OF SCRIPT STILL RAN'); ?>

    and it didn't. I figure that should be enough to settle it, but I want to get more feedback and maybe help clear up confusion for anyone else who has wondered about this. So if anyone knows off the top of their head (no pun intended) - or has any conventions for receiving HEAD requests - that'd be great. Otherwise, I'll grep the C source later and respond in a comment with my findings. Thanks.
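    For anyone wanting to repeat the experiment without PECL::HTTP, the same check can be made from Python's standard library; a minimal sketch (Python 2; the URL is a placeholder for your own test page):

        import httplib

        conn = httplib.HTTPConnection('mysite.com')
        conn.request('HEAD', '/test-head-request.php')
        resp = conn.getresponse()
        body = resp.read()

        # Per the RFC, a HEAD response must carry no message-body, so a
        # compliant server/module combination gives an empty body here.
        print resp.status, repr(body)

    Combined with a side-effect marker like the error_log() call above, this shows both halves of the question: whether a body leaks out, and whether the script keeps executing.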


  • Translating CURL to FLEX HTTPRequests

    - by Joshua
    I am trying to convert some cURL code to Flex/ActionScript. Since I am 100% ignorant about cURL, 50% ignorant about Flex, and 90% ignorant about HTTP in general... I'm having some significant difficulty. The following cURL code is from http://code.google.com/p/ga-api-http-samples/source/browse/trunk/src/v2/accountFeed.sh and I have every reason to believe it works correctly:

        USER_EMAIL="[email protected]"   #Insert your Google Account email here
        USER_PASS="secretpass"      #Insert your password here

        googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
            -d Email=$USER_EMAIL \
            -d Passwd=$USER_PASS \
            -d accountType=GOOGLE \
            -d source=curl-accountFeed-v2 \
            -d service=analytics \
            | awk /Auth=.*/)"

        feedUri="https://www.google.com/analytics/feeds/accounts/default\
        ?prettyprint=true"

        curl $feedUri --silent \
            --header "Authorization: GoogleLogin $googleAuth" \
            --header "GData-Version: 2"

    The following is my abortive attempt to translate the above cURL to AS3:

        var request:URLRequest = new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
        request.method = URLRequestMethod.POST;

        var GoogleAuth:String = "$(curl https://www.google.com/accounts/ClientLogin -s " +
            "-d [email protected] " +
            "-d Passwd=secretpass " +
            "-d accountType=GOOGLE " +
            "-d source=curl-accountFeed-v2" +
            "-d service=analytics " +
            "| awk /Auth=.*/)";

        request.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin " + GoogleAuth));
        request.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));

        var loader:URLLoader = new URLLoader();
        loader.dataFormat = URLLoaderDataFormat.BINARY;
        loader.addEventListener(Event.COMPLETE, GACompleteHandler);
        loader.addEventListener(IOErrorEvent.IO_ERROR, GAErrorHandler);
        loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, GAErrorHandler);
        loader.load(request);

    This probably gives you all a good laugh, and that's okay, but if you can find any pity on me, please let me know what I'm missing. I readily admit functional ineptitude, so letting me know how stupid I am is optional.
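    For what it's worth, the shell fragment embedded in that AS3 string is not something the runtime can execute; it is two separate HTTP exchanges. Here is the same flow as a minimal Python 2 sketch (credentials are placeholders), which may make the translation target clearer: first POST the form fields to ClientLogin and grab the Auth=... line, then send that token in an Authorization header on the feed request.

        import urllib
        import urllib2

        # Step 1: POST the login fields to ClientLogin.
        fields = urllib.urlencode({
            'Email': 'USER_EMAIL',        # placeholder
            'Passwd': 'USER_PASS',        # placeholder
            'accountType': 'GOOGLE',
            'source': 'curl-accountFeed-v2',
            'service': 'analytics',
        })
        login = urllib2.urlopen('https://www.google.com/accounts/ClientLogin', fields)
        # Mirrors the awk /Auth=.*/ step: keep the line starting with "Auth=".
        token = [l for l in login.read().splitlines() if l.startswith('Auth=')][0]

        # Step 2: request the feed with the token as a header.
        req = urllib2.Request('https://www.google.com/analytics/feeds/accounts/default?prettyprint=true')
        req.add_header('Authorization', 'GoogleLogin ' + token)
        req.add_header('GData-Version', '2')
        print urllib2.urlopen(req).read()

    In Flex, the equivalent would be two URLRequests: a POST whose body carries the login fields, and then a GET carrying the returned token in its Authorization header.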


  • How to POST a request using node.js

    - by Mr JSON
    I am trying to POST some JSON to a URL. I saw various other questions about this on Stack Overflow, but none of them seemed to be clear or to work. This is how far I got; I modified the example in the API docs:

        var http = require('http');
        var google = http.createClient(80, 'server');
        var request = google.request('POST', '/get_stuff',
            {'host': 'server', 'content-type': 'application/json'});

        request.write(JSON.stringify(some_json), encoding='utf8'); // possibly need to escape as well?
        request.end();

        request.on('response', function (response) {
            console.log('STATUS: ' + response.statusCode);
            console.log('HEADERS: ' + JSON.stringify(response.headers));
            response.setEncoding('utf8');
            response.on('data', function (chunk) {
                console.log('BODY: ' + chunk);
            });
        });

    When I POST this to the server, I get an error telling me that it's not in JSON format, or that it's not UTF-8, which they should be. I tried to pull the request URL, but it is null. I am just starting with node.js, so please be nice.
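    One thing worth ruling out: node's early http client sends the body chunked unless you set a Content-Length yourself, and some servers refuse to parse chunked JSON. As a cross-check that the server accepts the payload at all, here is the same request as a minimal Python 2 sketch (host, path, and payload are placeholders); urllib2 computes Content-Length automatically:

        import json
        import urllib2

        payload = json.dumps({'some': 'json'})
        req = urllib2.Request('http://server/get_stuff', payload,
                              {'Content-Type': 'application/json'})
        # urllib2 adds a Content-Length header from the body for us.
        resp = urllib2.urlopen(req)
        print resp.getcode(), resp.read()

    If this succeeds where the node version fails, setting the Content-Length header explicitly in the node request is the first thing to try.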


  • Why doesn't Default route work using Html.ActionLink in this case?

    - by StuperUser
    I have a rather peculiar issue with routing, coming back to it after not having to worry about its configuration for a year. I am using the default route and the ignore route for resources:

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    I have a RulesController with actions for Index and Lorem, and Index.aspx and Lorem.aspx in the Views/Rules directory. I have an ActionLink aimed at Rules/Index on the master page:

        <li><div><%: Html.ActionLink("linkText", "Index", "Rules")%></div></li>

    The link is rendered as http://localhost:12345/Rules/ and I am getting a 404. When I type Index into the URL, the application routes it to the action. When I change the default route action from "Index" to "Lorem", the ActionLink is rendered as http://localhost:12345/Rules/Index - adding the Index since it's no longer on the default route - and the application routes to the Index action correctly.

    I have used Phil Haack's Routing Debugger, but entering the URL http://localhost:12345/Rules/ causes a 404 using that too. I think I've covered all of the rookie mistakes, the relevant SO questions, and the basic RTFMs. I'm assuming that "Rules" isn't any sort of reserved word in routing. Other than updating the routes and debugging them, what can I look at?


  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we have all heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here.

    Obviously fewer HTTP requests means fewer round trips and hence better. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like JavaScript files in parallel.

    Even if chunking is not an issue, it seems like there would be some maximum recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must merely execute in order.

    Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?


  • Why is Firefox prompting to download a file that is POST'd to?

    - by alex
    This is the most peculiar thing. It is from an old in-house CMS. When I attempt to submit my changes, it prompts to save the file linked in the action attribute of the form.

    Request headers:

        POST /~site/edit/articles/article_save.php?id=54 HTTP/1.1
        Host: example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://example.com
        Content-Type: multipart/form-data; boundary=---------------------------10102754414578508781458777923
        Content-Length: 940

    Request body:

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="title"

        Home Content
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="catid"

        18
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="activecheck"

        1
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="image"

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgToolbarSelectBlock"

        <p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="content"

        <p>Edit your article in this text box.</p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgEditor"

        true
        -----------------------------10102754414578508781458777923--

    Response:

        HTTP/0.9 200 OK

    And then Firefox shows the open/save prompt. I can't determine from the response headers why this is prompting to open/save. It has always worked, and all other PHP files on the site work fine. Anyone have a clue? Thanks.

    Update: Apparently, it just crashes Safari.
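    A note on reading that capture: the response line "HTTP/0.9 200 OK" means no real status line or headers came back at all; sniffers label a raw, headerless body as HTTP/0.9. With no Content-Type to go on, Firefox falls back to the open/save prompt. A minimal Python 2 sketch of a server that answers this way, for comparison (the port is arbitrary):

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('localhost', 8080))
        srv.listen(1)

        while True:
            conn, addr = srv.accept()
            conn.recv(4096)                  # read (and ignore) the request
            conn.sendall('no headers here')  # raw body, no status line
            conn.close()

    If that reading is right, the place to look is the PHP side: something in article_save.php (or in front of it) is likely emitting raw output, or dying, before any headers are sent.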


  • Exclude subdirectory from rewrite rule in web.config

    - by Clog
    This question comes up often, but I can only find solutions for PHP, Apache, .htaccess, etc., and not for web.config.

    I would like my pages to return in HTTP, not HTTPS, except for forms within certain subdirectories. I have created the following web.config file, but how do I exclude a subdirectory called forms?

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Force all to HTTP" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="on" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="http://www.mysite.com/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Many thanks, all you clever clogs.


  • Understanding REST: is GET fundamentally incompatible with any "number of views" counter?

    - by cocotwo
    I'm trying to understand REST. Under REST, a GET must not trigger something transactional on the server (this is a definition everybody agrees upon; it is fundamental to REST).

    So imagine you've got a website like stackoverflow.com (I say "like", so if I get the underlying details of SO wrong it doesn't change my question), where every time someone reads a question using a GET, there's also some display showing "This question has been read 256 times". Now someone else reads that question. The counter is now at 257. The GET is transactional because the number of views got incremented, and is incremented again on the next read. The "number of views" is incremented in the DB; there's no arguing about that (for example, on SO the number of times any question has been viewed is always displayed).

    So, is a REST GET fundamentally incompatible with any kind of "number of views" functionality in a website? And if it wanted to be "RESTful", should the SO main page either stop displaying plain HTML links that are accessed using GETs, or stop displaying "this question has been viewed x times"? Because incrementing a counter in a DB is transactional and hence "unRESTful"?

    EDIT: just so that people Googling this can get some pointers, from http://www.xfront.com/REST-Web-Services.html :

        4. All resources accessible via HTTP GET should be side-effect free. That is, the request should just return a representation of the resource. Invoking the resource should not result in modifying the resource.

    Now, to me, if the representation contains the "number of views", it is part of the resource [and on SO the "number of views" a question has is a very important piece of information], and accessing it definitely modifies the resource. This is in sharp contrast with, say, a truly RESTful HTTP GET like the one you can make on an Amazon S3 resource, where your GET is guaranteed not to modify the resource you get back. But then I'm still very confused.


  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify the redirect type and just use:

        header("Location: http://newsite.com");

    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, it decides there's no reason to parse anything on future requests.

    But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that will do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
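    The symptom described (tracking fires once, then never again) matches 301 caching: browsers may cache a permanent redirect and jump straight to the target without contacting the server at all. Here is the record-then-redirect flow as a minimal Python WSGI sketch, with the tracking call stubbed out and a Cache-Control header added to keep the browser coming back (whether that trade-off is acceptable for SEO is a separate question):

        from wsgiref.simple_server import make_server

        def record_hit(environ):
            # Stub for the MySQL insert described above.
            print 'hit from', environ.get('REMOTE_ADDR')

        def app(environ, start_response):
            record_hit(environ)  # do the tracking work before redirecting
            start_response('301 Moved Permanently', [
                ('Location', 'http://newsite.com'),
                # Without this, the browser can cache the 301 and skip
                # the server entirely on repeat visits.
                ('Cache-Control', 'no-cache'),
            ])
            return []

        if __name__ == '__main__':
            make_server('', 8080, app).serve_forever()

    The same two ideas carry straight back to PHP: do all database work before emitting the Location header, and send a Cache-Control header alongside the 301 if every hit must be counted.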


  • How to execute python script on the BaseHTTPServer created by python?

    - by user1731699
    I have simply created a Python server with:

        python -m SimpleHTTPServer

    I had a .htaccess file (I don't know if it is useful with a Python server) with:

        AddHandler cgi-script .py
        Options +ExecCGI

    Now I am writing a simple Python script:

        #!/usr/bin/python
        import cgitb
        cgitb.enable()

        print 'Content-type: text/html'
        print '''
        <html>
        <head>
        <title>My website</title>
        </head>
        <body>
        <p>Here I am</p>
        </body>
        </html>
        '''

    I made test.py (the name of my script) executable with:

        chmod +x test.py

    I am loading it in Firefox with this address: http://0.0.0.0:8000/test.py

    Problem: the script is not executed... I see the code in the web page... And the server log shows:

        localhost - - [25/Oct/2012 10:47:12] "GET / HTTP/1.1" 200 -
        localhost - - [25/Oct/2012 10:47:13] code 404, message File not found
        localhost - - [25/Oct/2012 10:47:13] "GET /favicon.ico HTTP/1.1" 404 -

    How can I manage the execution of Python code simply? Is it possible to write a Python server that executes Python scripts, with something like:

        import BaseHTTPServer
        import CGIHTTPServer

        httpd = BaseHTTPServer.HTTPServer(
            ('localhost', 8123),
            CGIHTTPServer.CGIHTTPRequestHandler)
        # here some code to say: hey, please execute python scripts on the webserver... ;-)
        httpd.serve_forever()

    Or something else...
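    The stock CGI handler does exactly this; no .htaccess is needed (that file only means something to Apache). The one catch is that, by default, CGIHTTPRequestHandler only executes scripts that live under a cgi-bin/ directory. A minimal sketch:

        import BaseHTTPServer
        import CGIHTTPServer

        handler = CGIHTTPServer.CGIHTTPRequestHandler
        # Only scripts under these directories (relative to the current
        # working directory) are executed; everything else is served as
        # a static file.
        handler.cgi_directories = ['/cgi-bin']

        httpd = BaseHTTPServer.HTTPServer(('localhost', 8123), handler)
        httpd.serve_forever()

    With this running, moving test.py into ./cgi-bin/ and requesting http://localhost:8123/cgi-bin/test.py should execute it instead of serving its source. From the shell, python -m CGIHTTPServer behaves the same way.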


  • Repeated host lookups failing in urllib2

    - by reve_etrange
    I have code which issues many HTTP GET requests using Python's urllib2, in several threads, writing the responses into files (one per thread). During execution, it looks like many of the host lookups fail, causing a "name or service not known" error; see the appended error log for an example.

    Is this due to a flaky DNS service? Is it bad practice to rely on DNS caching if the host name isn't changing? I.e., should a single lookup's result be passed into urlopen?

        Exception in thread Thread-16:
        Traceback (most recent call last):
          File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
            self.run()
          File "/home/da/local/bin/ThreadedDownloader.py", line 61, in run
            page = urllib2.urlopen(url) # get the page
          File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
            return _opener.open(url, data, timeout)
          File "/usr/lib/python2.6/urllib2.py", line 391, in open
            response = self._open(req, data)
          File "/usr/lib/python2.6/urllib2.py", line 409, in _open
            '_open', req)
          File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
            result = func(*args)
          File "/usr/lib/python2.6/urllib2.py", line 1170, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "/usr/lib/python2.6/urllib2.py", line 1145, in do_open
            raise URLError(err)
        URLError: <urlopen error [Errno -2] Name or service not known>
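    Whatever the underlying cause turns out to be, transient resolver failures are usually absorbed with a bounded retry around urlopen. A minimal sketch (the attempt count and backoff are arbitrary choices):

        import time
        import urllib2

        def fetch(url, attempts=3, delay=2.0):
            """GET url, retrying URLErrors with a growing pause."""
            for i in range(attempts):
                try:
                    return urllib2.urlopen(url).read()
                except urllib2.URLError:
                    if i == attempts - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay * (i + 1))

    If lookups for the same unchanging host keep failing, resolving it once with socket.gethostbyname() and substituting the IP into the URL (with a Host header set manually) is a workable, if crude, way to take the resolver out of the loop.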


  • Mount CIFS share gives "mount error 127 = Key has expired"

    - by djk
    Hi, I'm currently replicating the setup of a CentOS box and am running into a strange error while trying to mount a Samba share that resides on a NAS. The error I'm getting is:

        mount error 127 = Key has expired
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    The settings are identical to the old machine, and the password is definitely correct as well. I have googled the issue, of course, and looked at every single page that references it (not that many), and have still not found an answer.

    The older CentOS box is using Samba version 3.0.28-0.el4.9, and the one I'm trying to set up now is 3.0.33-3.7.el5_3.1. I don't know if this has anything to do with it, but it's certainly one of the only differences between the two setups. When I try the mount command, this appears in the syslog:

        Sep 8 10:51:54 helvetica2 kernel: Status code returned 0xc0000072 NT_STATUS_ACCOUNT_DISABLED
        Sep 8 10:51:54 helvetica2 kernel: CIFS VFS: Send error in SessSetup = -127
        Sep 8 10:51:54 helvetica2 kernel: CIFS VFS: cifs_mount failed w/return code = -127

    The account is very much not disabled, as it works on the old box using the same credentials. Has anyone else seen this problem? Thanks!


  • Request Entity Too Large error while uploading files of more than 128KB over SSL

    - by tushar
    We have a web portal built on the Java Spring framework, running on the Tomcat app server. The portal is served through an Apache web server connected to Tomcat through the JK connector. The entire portal is HTTPS-enabled on Apache's port 443. The Apache version is Apache/2.4.2 (Unix), the latest stable version of the Apache web server.

    Whenever we try to upload files of more than 128 KB into the portal, we get a 413 error:

        Request Entity Too Large
        The requested resource /teamleadchoachingtracking/doFileUpload does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.

    In the Apache error log we get the following errors:

        AH02018: request body exceeds maximum size (131072) for SSL buffer
        AH02257: could not buffer message body to allow SSL renegotiation to proceed

    We searched Google, and there were suggestions to set SSLRenegBufferSize to some high value like 10 MB. Based on these suggestions, we put the following entry in the VirtualHost section of the httpd config file:

        SSLRenegBufferSize 10486000

    But the error still persists. We have also specified SSLVerifyClient none, but renegotiation still happens. This is a very inconsistent and frustrating error. Any help will be highly appreciated. Many thanks in advance.


  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would like to still visit some blocked websites (Facebook, YouTube). I have a VPS in the USA and I have installed tinyproxy on it. I log in to my VPS with SSH port-forwarding and I have configured my browser appropriately.

    Everything works more or less: I can surf to those websites, but everything is unusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS, like:

        channel 6: open failed: connect failed:

    Also, in the log file of tinyproxy I see some bad things:

        ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected
        ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7
        ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected
        ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read.
        ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected



  • "file not found" error while committing

    - by AntonAL
    I have a working copy checked out from an SVN repository. When I try to commit, I get the following error:

        svn: File not found: revision 57, path '/trunk/path/to/my/file/logo-mini.jpg'

    I've found this file in the repo and noticed that it has only one revision: 58. I don't understand why SVN complains about this file when it is present, and why it points to revision 57 instead of 58. I've also renamed the grand-grand-grand-parent folder of this file. Possibly this is an issue...

    Update: detailed error description from the Cornerstone app (Mac OS X):

        Description : Could not find the specified file.
        Suggestion : Check that the path you have specified is correct.

        Technical Information
        =====================
        Error : V4FileNotFoundError
        Exception : ZSVNNoSuchEntryException

        Causal Information
        ==================
        Description : Commit failed (details follow):
        Status : 160013
        File : subversion/libsvn_client/commit.c, 867
        Description : File not found: revision 57, path '/trunk/assets/themes/base/article-content/images/logo-mini.jpg'
        Status : 160013
        File : subversion/libsvn_fs_fs/tree.c, 663

    So, I renamed the directory /trunk/assets/themes to /trunk/assets/skins while improving the project structure. I've tried the following:

        updating the /trunk/assets/themes directory
        cleaning
        deleting it from the filesystem and checking out again
        reverting the entire /trunk/assets/themes directory to the HEAD revision

    Even this doesn't help; I'm still getting the same error. I've got no results.


  • WinXP: Error 1167 -- Device (LPT1) not connected

    - by Thomas Matthews
    I am writing a program that opens LPT1 and writes a value to it. The WriteFile function is returning an error code of 1167, "The device is not connected". The Device Manager shows that LPT1 is present. I have a cable connected between a development board and the PC; the cable converts JTAG pin signals to signals on the parallel port. Power is applied, the cable is connected between the development board and the PC, and the development board is powered on.

    I am using Windows XP, MS Visual Studio 2008, C language, console application, debug environment. Here are the relevant code fragments:

        HANDLE parallel_port_handle;

        void initializePort(void)
        {
            TCHAR * port_name = TEXT("LPT1:");
            parallel_port_handle = CreateFile(
                port_name,
                GENERIC_READ | GENERIC_WRITE,
                0,              // must be opened with exclusive-access
                NULL,           // default security attributes
                OPEN_EXISTING,  // must use OPEN_EXISTING
                0,              // not overlapped I/O
                NULL            // hTemplate must be NULL for comm devices
                );
            if (parallel_port_handle == INVALID_HANDLE_VALUE)
            {
                // Handle the error.
                printf("CreateFile failed with error %d.\n", GetLastError());
                Pause();
                exit(1);
            }
            return;
        }

        void writePort(unsigned char a_ucPins, unsigned char a_ucValue)
        {
            DWORD dwResult;
            if (a_ucValue)
            {
                g_siIspPins = (unsigned char) (a_ucPins | g_siIspPins);
            }
            else
            {
                g_siIspPins = (unsigned char) (~a_ucPins & g_siIspPins);
            }

            /* This is a sample code for Windows/DOS without Windows Driver. */
            // _outp( g_usOutPort, g_siIspPins );

            //----------------------------------------------------------------
            // For Windows XP and later
            //----------------------------------------------------------------
            if (!WriteFile(parallel_port_handle, &g_siIspPins, 1, &dwResult, NULL))
            {
                printf("Could not write to LPT1 (error %d)\n", GetLastError());
                Pause();
                return;
            }
        }

    If you believe this should be posted on Stack Overflow, please migrate it over (thanks).


  • Problem with apache + ssl: length mismatch error and occasional bad request

    - by Ruben Garat
    We migrated a server from Slicehost to Linode recently, copying the config from one server to the other. Everything works perfectly, except that we get occasional "Bad Request" errors. This error is not common; you can use the site all day and not see it, and the next day it will happen a lot.

    Apart from that, much of the time, even though the request works fine, we get some errors. Using ssldump we see:

        New TCP connection #1: myip(39831) <-> develserk(443)
        1 1  0.2316 (0.2316)  C>S  SSLv2 compatible client hello
          Version 3.1
          cipher suites
            Unknown value 0x39
            Unknown value 0x38
            Unknown value 0x35
            TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
            TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
            TLS_RSA_WITH_3DES_EDE_CBC_SHA
            SSL2_CK_3DES
            Unknown value 0x33
            Unknown value 0x32
            Unknown value 0x2f
            SSL2_CK_RC2
            TLS_RSA_WITH_RC4_128_SHA
            TLS_RSA_WITH_RC4_128_MD5
            SSL2_CK_RC4
            TLS_DHE_RSA_WITH_DES_CBC_SHA
            TLS_DHE_DSS_WITH_DES_CBC_SHA
            TLS_RSA_WITH_DES_CBC_SHA
            SSL2_CK_DES
            TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
            SSL2_CK_RC2_EXPORT40
            TLS_RSA_EXPORT_WITH_RC4_40_MD5
            SSL2_CK_RC4_EXPORT40
        1 2  0.2429 (0.0112)  S>C  Handshake ServerHello
          Version 3.1
          session_id[32]=
            9a 1e ae c4 5f df 99 47 97 40 42 71 97 eb b9 14
            96 2d 11 ac c0 00 15 67 4e f3 7d 65 4e c4 30 e9
          cipherSuite        Unknown value 0x39
          compressionMethod  NULL
        1 3  0.2429 (0.0000)  S>C  Handshake Certificate
        1 4  0.2429 (0.0000)  S>C  Handshake ServerKeyExchange
        1 5  0.2429 (0.0000)  S>C  Handshake ServerHelloDone
        1 6  0.4965 (0.2536)  C>S  Handshake ClientKeyExchange
        1 7  0.4965 (0.0000)  C>S  ChangeCipherSpec
        1 8  0.4965 (0.0000)  C>S  Handshake
        1 9  0.5040 (0.0075)  S>C  ChangeCipherSpec
        1 10 0.5040 (0.0000)  S>C  Handshake
        ERROR: Length mismatch

    And from the Apache error.log:

        [Fri Aug 27 14:50:05 2010] [debug] ssl_engine_io.c(1892): OpenSSL: I/O error, 5 bytes expected to read on BIO#b80c1e70 [mem: b8100918]

    The server is Ubuntu 10.04.1, the Apache version is 2.2.14-5ubuntu8, and the OpenSSL version is 0.9.8k-7ubuntu8.


  • getting a pyserial not loaded error

    - by skinnyTOD
    I'm getting a "pyserial not loaded" error with the Python script fragment below (running OS X 10.7.4). I'm trying to run a Python app called Myro for controlling the Parallax Scribbler2 robot - I figured it would be a fun way to learn a bit of Python - but I'm not getting out of the gate here. I've searched out all the Myro help docs, but like a lot of in-progress open source programs, they are a moving target and conflicting, out of date, or not very specific about OS X.

    I have MacPorts installed, and I installed py27-serial without error. MacPorts lists the Python versions I have installed, along with the active version:

        Available versions for python:
        none
        python24
        python25
        python25-apple
        python26
        python26-apple
        python27
        python27-apple (active)
        python32

    Perhaps stuff is getting installed in the wrong places, or my PATH is wrong (I don't much know what I am doing in Terminal and have probably screwed something up). Trying to find out about my sys.path, here's what I get:

        >>> import sys
        >>> sys.path
        ['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']

    Is that a mess? Can I fix it? Anyway, thanks for reading this far. Here's the Python bit that is throwing the error; the error occurs at the 'try: import serial' block.

        # Global variable robot, to set SerialPort()
        robot = None
        pythonVer = "?"
        rbString = None
        ptString = None
        statusText = None

        # Now, let's import things
        import urllib
        import tempfile
        import os, sys, time

        try:
            import serial
        except:
            print("WARNING: pyserial not loaded: can't upgrade!")
            sys.exit()

        try:
            input = raw_input   # Python 2.x
        except:
            pass                # Python 3 and better, input is defined

        try:
            from tkinter import *
            pythonver = "3"
        except:
            try:
                from Tkinter import *
                pythonver = "2"
            except:
                pythonver = "?"
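    A quick diagnostic worth running: the sys.path shown above belongs to Apple's system Python (everything is under /System/Library/Frameworks/...), while MacPorts installs py27-serial into its own interpreter, which by default lives under /opt/local. If the script is being launched with /usr/bin/python, it will never see the MacPorts package. A minimal check, run from whichever python starts Myro (the /opt/local path is the MacPorts default and an assumption here):

        import sys

        # Which interpreter is actually running? MacPorts' python2.7
        # normally lives under /opt/local/bin.
        print sys.executable

        try:
            import serial
            print 'pyserial found at', serial.__file__
        except ImportError:
            print 'pyserial is not importable from this interpreter'

    If sys.executable points at /usr/bin/python, launching the script with /opt/local/bin/python2.7 instead (or running sudo port select --set python python27 and opening a new shell) is the usual fix.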


  • Installing messaging software displays error 1324 invalid character

    - by llykke
    Trying to install the Reuters Messaging software onto a Windows 7 PC, we receive the error message:

        Error 1324: The folder path 'My Documents' contains an invalid character

    We've tried installing the application using the local admin account and the user account, which is an AD account (roaming?). This user account has administrative rights (i.e. it should be allowed to install applications). The user's 'My Documents' folder is located on a network drive, where only the user has access.

    We've tried experimenting with the registry entries under

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

    and setting them to a local path (i.e. C:\Users\Username\Documents), but this didn't resolve the error. We've also tried the following, which was taken from a website whose name I can't remember:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
        Select the NtfsDisable8dot3NameCreation entry and change the value to 0
        Select the Win31FileSystem entry and change the value to 0

    which didn't resolve the issue either.

    Edit: This was also an issue when attempting to install the Citrix native client necessary to run Citrix applications (*.ica extension). The same error box appeared.


  • Puppet Agent fails sporadically, with either timeout or "Could not find class" error

    - by smokris
    I have a Puppet master running on a Xen dom0, and 3 domUs syncing to it via an hourly crontab puppet agent --test. About 80% of the time, the puppet agent --test completes successfully:

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 5.08 seconds

    The other 20% of the time, it fails midway, with errors such as the following:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class iptables for test1 at /etc/puppet/manifests/site.pp:1 on node test1
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test2
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 24.73 seconds
        err: Could not send report: Error 500 on SERVER: Internal Server Error
        private method `gsub' called for WEBrick::HTTPStatus::RequestTimeout:Class
        WEBrick/1.3.1 (Ruby/1.8.5/2006-08-25) OpenSSL/0.9.8e-rhel5 at puppet:8140

    or

        info: Retrieving plugin
        err: Could not retrieve catalog from remote server: execution expired
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run

    or

        info: Retrieving plugin
        info: Caching catalog for test3
        info: Applying configuration version '1333319732'
        notice: Finished catalog run in 9.47 seconds
        err: Could not send report: Error 408 on SERVER: Request Timeout

    During this time, I've not made any changes to the Puppet configuration - it just sporadically fails. I'm running puppet-2.7.12 on CentOS, and followed the setup instructions described at http://docs.puppetlabs.com/learning/agent_master_basic.html. Any ideas about how I can troubleshoot this?


  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list, this is not a GNU Parallel-specific problem. They suggested that I post my problem here.

    The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes the error. It happens when trying to use any bash script containing a 'while read' loop with GNU Parallel.

    I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh

        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250 MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up, I want to break the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running 500ish jobs per instance of parallel. To get round this limitation, we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000ish jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh

        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
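    On the side-effects question: a "broken pipe" simply means the process wrote to a pipe whose reader had already exited; data the reader already consumed is unaffected, the writer just stops early. For comparison, here is a rough Python equivalent of linkcheck.sh (lookups via the standard library instead of the host command), showing the conventional way a filter opts back into the default die-quietly-on-SIGPIPE behavior that shell utilities get automatically:

        import signal
        import socket
        import sys

        # Restore the default SIGPIPE disposition: terminate silently if
        # our reader goes away, instead of raising IOError: Broken pipe.
        signal.signal(signal.SIGPIPE, signal.SIG_DFL)

        for line in sys.stdin:
            domain = line.strip()
            if not domain:
                continue
            try:
                print domain, socket.gethostbyname(domain)
            except socket.error:
                print domain, 'lookup failed'

    This is a sketch of the general pattern, not a statement about why the while read version fails under Parallel; that part is still the open question.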

