Search Results

Search found 127679 results on 5108 pages for 'http status code 503'.


  • XHR readyState = 4 but Status = 0 in Google Chrome Browser

    - by Jay
    Hello, I've got a strange problem with an AJAX call on my site. I make a simple AJAX call to a script on my site, but the call fails with readyState = 4 and status = 0. There's no cross-domain problem, because the script I want to call is on my server.

        $.ajax({
            type: "GET",
            url: 'http://mydomain.com/test.php',
            success: function (response) {
                console.log(response);
            },
            error: function (XHR) {
                console.log(arguments);
            }
        });

    I've googled a lot of sites, but there seems to be no solution for this!
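
    One common cause, offered here as an assumption rather than a confirmed diagnosis: status 0 usually means the browser aborted the request before a response arrived, for example because the call fires from a link or form handler and the page starts navigating away. A minimal sketch of the usual guard (the #myForm id is hypothetical):

        // Cancel the default navigation so the XHR is not aborted mid-flight.
        $('#myForm').submit(function (e) {
            e.preventDefault();
            $.ajax({
                type: 'GET',
                url: '/test.php', // a same-origin relative URL also avoids protocol/port mismatches
                success: function (response) { console.log(response); },
                error: function (xhr) { console.log(xhr.readyState, xhr.status); }
            });
        });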

    Read the article

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
    My website has two bindings: 1000 and 1443 (ports 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP, port 1443 is HTTPS. What I want to do is redirect any incoming request for http://server:1000 to https://server:1443. I'm playing around with IIS 7 rewrite module 2.0, but I'm banging my head against the wall. Any insight is appreciated! BTW, the rewrite configuration below works great with a site that has an HTTP binding on port 80 and an HTTPS binding on port 443, but it doesn't work with my ports.

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions trackAllCaptures="true">
                    <add input="{HTTPS}" pattern="off" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
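
    A hedged guess at the missing piece, untested against this exact setup: with non-standard bindings, {HTTP_HOST} still carries the original host:port pair (server:1000), so redirecting to https://{HTTP_HOST}/... points back at the HTTP port. Spelling the HTTPS port out in the action is the usual fix:

        <action type="Redirect" redirectType="Found" url="https://{SERVER_NAME}:1443/{R:1}" />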

    Read the article

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server, running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions like so for this folder:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to be run by the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were being saved when initiated via this protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing via HTTPS were being saved by apache:apache, while sessions from HTTP clients were saved as someclient:psacln. What I'd like to ask:

    1. How can I avoid this problem with session permissions? When a session is created via unencrypted HTTP and the client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session save created by someclient:psacln. The converse is also true.
    2. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line? If not, can I have PHP sessions saved with rw-rw---- permissions, and then add apache to the psacln group?

    Any other suggestions on how to fix this issue?
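
    One hedged workaround, assuming Plesk allows per-vhost PHP overrides (the path below is an example layout, not a known Plesk default): give each site its own session directory that both server users can reach, instead of sharing /var/lib/php/session.

        # hypothetical vhost.conf fragment for mod_php
        php_admin_value session.save_path /var/www/vhosts/example.com/tmp/sessions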

    Read the article

  • Why is my code signing (MS authenticode) verification failing?

    - by Tim
    I posted this question and have a freshly minted code signing cert from Thawte. I followed the instructions (or so I thought) and the code signing claims to be done right; however, when I try to verify, the tool shows an error. I have no idea what it means and no idea how to fix this. Any comments would be appreciated. Command line to sign the exe:

        signtool sign /f mdt.pfx /p password /t http://timestamp.verisign.com/scripts/timstamp.dll test.exe

    Results:

        The following certificate was selected:
            Issued to: [my company]
            Issued by: Thawte Code Signing CA
            Expires:   4/23/2011 7:59:59 PM
            SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601

        Done Adding Additional Store
        Attempting to sign: test.exe
        Successfully signed and timestamped: test.exe

        Number of files successfully Signed: 1
        Number of warnings: 0
        Number of errors: 0

    Note that there are no errors or warnings. Now, when I try to verify, imagine my surprise:

        signtool verify /v test.exe

    results in:

        Verifying: test.exe
        SHA1 hash of file: 490BA0656517D3A322D19F432F1C6D40695CAD22

        Signing Certificate Chain:
            Issued to: Thawte Premium Server CA
            Issued by: Thawte Premium Server CA
            Expires:   12/31/2020 7:59:59 PM
            SHA1 hash: 627F8D7827656399D27D7F9044C9FEB3F33EFA9A

            Issued to: Thawte Code Signing CA
            Issued by: Thawte Premium Server CA
            Expires:   8/5/2013 7:59:59 PM
            SHA1 hash: A706BA1ECAB6A2AB18699FC0D7DD8C7DE36F290F

            Issued to: [my company]
            Issued by: Thawte Code Signing CA
            Expires:   4/23/2011 7:59:59 PM
            SHA1 hash: 7D1A42364765F8969E83BC00AB77F901118F3601

        The signature is timestamped: 4/27/2010 10:19:19 AM
        Timestamp Verified by:
            Issued to: Thawte Timestamping CA
            Issued by: Thawte Timestamping CA
            Expires:   12/31/2020 7:59:59 PM
            SHA1 hash: BE36A4562FB2EE05DBB3D32323ADF445084ED656

            Issued to: VeriSign Time Stamping Services CA
            Issued by: Thawte Timestamping CA
            Expires:   12/3/2013 7:59:59 PM
            SHA1 hash: F46AC0C6EFBB8C6A14F55F09E2D37DF4C0DE012D

            Issued to: VeriSign Time Stamping Services Signer - G2
            Issued by: VeriSign Time Stamping Services CA
            Expires:   6/14/2012 7:59:59 PM
            SHA1 hash: ADA8AAA643FF7DC38DD40FA4C97AD559FF4846DE

        Number of files successfully Verified: 0
        Number of warnings: 0
        Number of errors: 1
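
    For what it's worth, one well-known explanation (an assumption here, since the output doesn't confirm it): without /pa, signtool verify checks the signature against the Windows driver-signing policy, which ordinary Authenticode code-signing certificates fail even when the signature itself is fine. Verifying against the default Authenticode policy is worth a try:

        signtool verify /pa /v test.exe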

    Read the article

  • Proxy to either Rails app or Node.js app depending on HTTP path w/ Nginx

    - by Cirrostratus
    On Ubuntu 11, I have Nginx correctly serving either CouchDB or Node.js depending on the path, but I am unable to get Nginx to reach a Rails app via its port.

        server {
            rewrite ^/api(.*)$ $1 last;
            listen 80;
            server_name example.com;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_pass http://127.0.0.1:3005/;
            }

            location /ruby {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_pass http://127.0.0.1:9051/;
            }

            location /_utils {
                proxy_pass http://127.0.0.1:5984;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_buffering off; # buffering would break CouchDB's _changes feed
            }

            gzip on;
            gzip_comp_level 9;
            gzip_min_length 1400;
            gzip_types text/plain text/css image/png image/gif image/jpeg application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_vary on;
            gzip_http_version 1.1;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        }

    / and /_utils are working, but /ruby gives me a 403 Forbidden.
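
    A hedged observation, not a confirmed fix: with location /ruby and a trailing slash on proxy_pass, a request for /ruby/foo reaches the backend as //foo (the matched prefix is replaced by "/"), which some Rails stacks answer with an error. A sketch that strips the prefix explicitly, assuming the app expects root-relative paths:

        location /ruby {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            rewrite ^/ruby$ / break;
            rewrite ^/ruby(/.*)$ $1 break;
            proxy_pass http://127.0.0.1:9051;
        }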

    Read the article

  • Make router forward HTTP and HTTPS traffic to external App

    - by cOsticla
    I use a Linksys WRT54GL router with DD-WRT v24-sp2 (10/10/09) std (SVN revision 13064), which I am trying to make forward all HTTP and HTTPS traffic to an external app called Fiddler (used as a proxy) on port 8888. After a lot of digging on this site, the dd-wrt forum, dd-wrt.com and the web, I am stuck with the following piece of code that works (thanks to the guys from dd-wrt support for this info), but only for forwarding HTTP traffic (port 80):

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I tried to edit the code from above, and I came up with the following, but it still forwards only HTTP traffic, not HTTPS:

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`

        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp -m multiport --dports 80,443 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I am not sure whether it is even possible to forward HTTPS traffic this way using just a router, so I'd appreciate it if somebody would share their thoughts and/or examples regarding this subject here. Thanks!
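
    A hedged note on why the multiport rule alone may not be enough: the iptables side does redirect port 443, but what arrives at Fiddler is raw TLS, and a listener expecting plain proxy-style HTTP on port 8888 cannot unwrap it. HTTPS interception usually needs its own decrypting listener (whose root certificate the clients trust); the second port below is purely hypothetical:

        # sketch only: send intercepted HTTPS to a separate TLS-aware listener
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 443 \
            -j DNAT --to $PROXY_IP:8889   # 8889 is a hypothetical second port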

    Read the article

  • Debian VM refusing all traffic apart from HTTP

    - by james lewis
    I've got a VM with a fresh install of Debian (wheezy), and I've installed node and mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my node example, or port 28017 for my mongo status page) and see one of the two services (node or mongo). My requests are refused, though. As far as I can tell, Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through. It looks like this:

        root@devbox:/home/jlewis# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    As a test I set up nginx, and I was able to get to the nginx landing page from my host, no problems - so obviously HTTP traffic is allowed. I then set nginx up to forward all traffic upstream to mongo - no problems there, I was able to see the status page. I then did the same for my example node server and again, no problems. So HTTP traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it?

    EDIT - output from netstat -nltp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:28017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:51028           0.0.0.0:*               LISTEN      1541/rpc.statd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2462/sshd
        tcp        0      0 127.0.0.1:1337          0.0.0.0:*               LISTEN      2794/node
        tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2274/exim4
        tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1510/rpcbind
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2189/nginx
        tcp6       0      0 :::22                   :::*                    LISTEN      2462/sshd
        tcp6       0      0 :::45335                :::*                    LISTEN      1541/rpc.statd
        tcp6       0      0 ::1:25                  :::*                    LISTEN      2274/exim4
        tcp6       0      0 :::111                  :::*                    LISTEN      1510/rpcbind
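
    A hedged reading of that netstat output: node (127.0.0.1:1337) and mongod (127.0.0.1:28017) are bound to the loopback interface only, while nginx - the one service reachable from the host - listens on 0.0.0.0:80. If that is the cause, binding the node example to all interfaces should fix it; a minimal sketch:

        // bind to all interfaces instead of loopback
        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('Hello World\n');
        }).listen(1337, '0.0.0.0'); // or omit the host argument entirely

    For mongod, the equivalent knob is the bind_ip setting in /etc/mongodb.conf.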

    Read the article

  • How can I use Code Contracts in a C++/CLI project?

    - by Daniel Wolf
    I recently stumbled upon Code Contracts and have started using them in my C# projects. However, I also have a number of projects written in C++/CLI. For C# and VB, Code Contracts offer a handy configuration panel in the project properties dialog. For a C++/CLI project, there is no such panel. From the documentation, I got the impression that adding Code Contracts support to a C++/CLI project should be a simple matter of calling some external tools as part of the build process (namely ccrefgen.exe, cccheck.exe, and ccrewrite.exe). However, the number of command line options and restrictions concerning the call sequence have me somewhat intimidated. Can anybody point me to a simple way to run the Code Contracts tools as an automated part of the build process in Visual Studio?
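
    Not an authoritative recipe, but the usual shape of such automation is a post-build event that calls the tools directly; the install path and the ccrewrite switch below are assumptions that need to be checked against the Code Contracts documentation for your version:

        rem hypothetical post-build event - verify the path and switches locally
        "%ProgramFiles%\Microsoft\Contracts\Bin\ccrewrite.exe" -assembly "$(TargetPath)"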

    Read the article

  • Is there a way using Ruby's net/http to post form data to an HTTP proxy?

    - by Derek P.
    I have a basic Squid server set up, and I am trying to use Ruby's Net::HTTP::Proxy class to send a POST of form data to a specified HTTP endpoint. I assumed I could do the following:

        Net::HTTP::Proxy(my_host, my_port).start(url.host) do |h|
          req = Net::HTTP::Post.new(url.path)
          req.form_data = { "xml" => xml }
          h.request(req)
        end

    But, alas, the proxied Net::HTTP class doesn't seem to behave differently from the non-proxied one: my remote service responds telling me that it received a request from the wrong IP address, i.e. not the proxy. I am looking for a specific way to write the procedure so that I can successfully send a form post via a proxy. Help? :)
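
    A minimal sketch of the same call with the target port passed explicitly - an assumption about the failure mode (start() silently defaulting to port 80) rather than a verified fix; the proxy address and endpoint are hypothetical:

        require 'net/http'
        require 'uri'

        url = URI.parse('http://example.com/endpoint')     # hypothetical endpoint
        proxied = Net::HTTP::Proxy('my.squid.host', 3128)  # hypothetical proxy address
        proxied.start(url.host, url.port) do |http|
          req = Net::HTTP::Post.new(url.path)
          req.set_form_data('xml' => xml)   # 'xml' is the payload from the original snippet
          response = http.request(req)
          puts response.body
        end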

    Read the article

  • Selecting hrefs not starting with http

    - by sushil bharwani
    Using jQuery, I am trying to find all the URLs the user has entered which do not start with http or https, and then prepend http to each of them, so that when the user clicks one they are taken to the proper site instead of a broken link caused by a URL entered without http or https. I'd also like to mention that users have a field, "Websites they Like", where they enter websites of interest. So if they like Stack Overflow, they may end up writing www.stackoverflow.com, which will be treated as a relative link because it lacks http. My requirements are such that I can't prompt users to put http or https before their URLs.
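
    A short sketch of one way to do this - the selector assumes the links of interest are plain anchors, and deliberately skips absolute http(s) URLs, fragments, and site-relative paths:

        // prepend a scheme to anchors whose href has none
        $('a[href]')
            .not('[href^="http://"], [href^="https://"], [href^="#"], [href^="/"]')
            .attr('href', function (i, href) {
                return 'http://' + href;
            });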

    Read the article

  • How should I move my code from dev to production?

    - by Teddy
    I have created a PHP web application. I have three environments: DEV, TEST, PROD. What's a good tool / business practice for me to move my PHP web application code from DEV to TEST to the PROD environment? The catch: my TEST environment must still connect only to my TEST database, whereas my PROD environment must connect to my PROD database. So the code is mostly the same, except that once it is moved into PROD I need to change it to connect to the PROD database and not the TEST database. I've heard of people taking down Apache in such a way that it doesn't allow new connections, and once all the existing connections are idle, it simply brings down the web server. Then people manually copy the code and manually update the config files of the PHP application to point to the PROD instance. That seems terribly dangerous. Does a best practice exist?
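
    One widely used pattern, sketched here with hypothetical names (APP_ENV, the config/ layout): keep per-environment settings out of the code entirely and select them at runtime, so the same files can be copied unchanged from DEV to TEST to PROD.

        <?php
        // each server sets APP_ENV once (e.g. in the Apache vhost: SetEnv APP_ENV prod)
        $env = getenv('APP_ENV') ? getenv('APP_ENV') : 'dev';
        $config = parse_ini_file(__DIR__ . "/config/{$env}.ini");
        $db = new PDO($config['dsn'], $config['user'], $config['password']);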

    Read the article

  • Apache reverse proxy HTTP to HTTPS

    - by Coppes
    I have a website which is fully running on HTTPS. For some reason I was given the task of finding a way to convert a URL, for example http://www.domain.com/a/e-nc/youless, to its HTTPS version without losing the HTTP POST header, such as the POST values in the request. So I thought (not even sure about this) let's try to make a reverse proxy in Apache and see how that works. Anyway, after a lot of struggling I came to the point of asking it here. So, to be specific, my goal is: convert http://www.domain.com/a/e-nc/youless to https://www.domain.com/a/e-nc/youless without losing the POST conditions. What I have tried until now: I created a file called proxiedhosts in my apache2/sites-enabled folder with the following contents:

        SSLProxyEngine On
        SSLProxyCACertificateFile /etc/apache2/ssl/certificate****.pem
        ProxyRequests Off
        ProxyPreserveHost On

        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>

        ProxyPass        /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
        ProxyPassReverse /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/

    Thanks in advance!
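
    An alternative worth mentioning, as a sketch rather than a recommendation: a 307 redirect explicitly tells the client to repeat the same method and body against the new URL, so POST data survives the hop to HTTPS (unlike a plain 301/302, which clients may replay as GET):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^/a/e-nc/youless$ https://www.domain.com%{REQUEST_URI} [R=307,L]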

    Read the article

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directives to enable some sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop (note that static files such as myurl.com/css/mycss.css do not get redirected):

        RewriteEngine on
        RewriteCond ${REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to the HTTPS part, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work: https://myurl.com/shop does not get redirected to http://myurl.com/index.php/shop, and http://myurl.com/admin.php does not get redirected to https://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]

        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
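
    A hedged pointer at one likely culprit: %{REQUEST_URI} always starts with a leading slash (e.g. /admin.php), so patterns anchored as ^(admin\.php$|...) can never match. A sketch of the same rules with the slash accounted for, untested against this exact site:

        RewriteEngine on

        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(admin|login)\.php$
        RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/(admin|login)\.php$
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]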

    Read the article


  • Can I make Visual Studio's code completion window more like Eclipse (Java)?

    - by Matt
    Is it possible to make Visual Studio 2010's code completion window more like that of Eclipse (Java)? In particular, I'd love the code completion window to give me a variable's type, and a method's return type and expected parameters, without needing to hover the highlight over that particular variable or method. The little icons in VS's code completion that indicate whether something is a property, method, etc. are useful, but they just aren't enough.

    Read the article

  • Is it impossible to secure .NET code (intellectual property)?

    - by JL
    I used to work in JavaScript a lot, and one thing that really bothered my employers was that the source code was too easy to steal. Even with obfuscation, nothing really helped, because we all knew that any competent developer would be able to read that code if they wanted to. JS scripts are one thing, but what about SOA projects that have millions invested in IP (intellectual property)? I love .NET, and especially C#, but I recently again had to answer the question "If we give this compiled program over to our clients, can their developers reverse engineer it?" I had gone out of my way to obfuscate the code, but I knew it wouldn't take that much for another determined C# developer to get at it. So I earnestly pose the question: is it impossible to secure .NET code? The considerations I have are as follows:

    - Even regular native executables can be reversed, but not every developer has the skill to do this. It's a lot harder to disassemble a native executable than a .NET assembly.
    - Obfuscation will only get you so far, but it does help a little.
    - Why have I never seen any public acknowledgement by Microsoft that anything written in .NET is subject to relatively easy IP theft?
    - Why have I never seen a scrap of countermeasure training on any Microsoft site?
    - Why does VS come with a community obfuscator as an optional component?

    OK, maybe I have just had my head in the sand here, but it's not exactly high on most developers' priority lists. Are there any plans to address my concerns in any future version of .NET? I'm not knocking .NET, but I would like some realistic answers, thank you. Question marked as subjective and community!

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach:

    - there are literally hundreds of classes - the code file is half a meg in size
    - duplicate classes (e.g. Type and Type1, which both represent the same type)
    - there are classes declared as inheriting from a base class, but that base class is not generated/defined

    I understand that there are limitations to the kinds of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold:

    1. Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema?
    2. Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?

    Thank you very much!

    -Sasha Borodin DFWHC.org
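
    For reference, a sketch of the kind of xsd.exe invocation described above - the file names are hypothetical stand-ins for the HR-XML documents, and the switches should be double-checked against your SDK version:

        rem generate classes from a main schema plus the schemas it imports
        xsd.exe /classes /language:CS /namespace:HrXml Main.xsd CommonTypes.xsd Extensions.xsd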

    Read the article

  • Google Code: what kind of license should I use?

    - by Dran Dane
    Hello, I would like to store my code in a repository such as Google Code. What kind of license should I use? I don't want to box myself in: my code may be open source and reused by others, but I want to remain able to reuse it myself in any project, open source or not, commercial or not. Thank you.

    Read the article

  • Problem rewriting URLs from HTTPS to HTTP using the IIS7 URL rewriter, when using WebForms ReturnUrl=

    - by theminesgreg
    I took Jeff's rewrite rules from this post, and the HTTP-to-HTTPS conversion works great. However, going back to HTTP is giving me problems because of the ReturnUrl= in the URL (I'm using WebForms). Here's an example of the URL:

        https://localhost/Login.aspx?ReturnUrl=%2f

    Here's the rewrite rule I'm using:

        <rule name="HTTPS to HTTP redirect for all other pages" stopProcessing="true">
          <match url="^login\.aspx$" ignoreCase="true" negate="true" />
          <conditions>
            <add input="{SERVER_PORT}" pattern="^443$" />
          </conditions>
          <action type="Redirect" redirectType="Found" url="http://{HTTP_HOST}{REQUEST_URI}" />
        </rule>

    Here's the resulting rewritten URL:

        http://localhost/,/

    Has anyone found a workaround for this?
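
    A hedged sketch of two things commonly tried for mangled redirect targets like this, neither verified against this exact rule: turn off the implicit query-string append (since {REQUEST_URI} already carries the query string), and/or use {UNENCODED_URL} so the encoded ReturnUrl value passes through untouched:

        <action type="Redirect" redirectType="Found" appendQueryString="false"
                url="http://{HTTP_HOST}{UNENCODED_URL}" />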

    Read the article

  • MBR status confusion

    - by Ahmed Ghoneim
    I'm learning disk records. This is the first sector of my USB stick, viewed with bless on Ubuntu after formatting the stick with Disk Utility as an MBR table with a FAT partition:

        EB 58 90 6D 6B 64 6F 73 66 73 00 00 02 08 20 00
        02 00 00 00 00 F8 00 00 3E 00 83 00 00 00 00 00
        94 88 7E 00 98 1F 00 00 00 00 00 00 02 00 00 00
        01 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 29 A9 38 B1 34 57 61 76 65 20 20 20 20 20
        20 20 46 41 54 33 32 20 20 20 0E 1F BE 77 7C AC
        22 C0 74 0B 56 B4 0E BB 07 00 CD 10 5E EB F0 32
        E4 CD 16 CD 19 EB FE 54 68 69 73 20 69 73 20 6E
        6F 74 20 61 20 62 6F 6F 74 61 62 6C 65 20 64 69
        73 6B 2E 20 20 50 6C 65 61 73 65 20 69 6E 73 65
        72 74 20 61 20 62 6F 6F 74 61 62 6C 65 20 66 6C
        6F 70 70 79 20 61 6E 64 0D 0A 70 72 65 73 73 20
        61 6E 79 20 6B 65 79 20 74 6F 20 74 72 79 20 61
        67 61 69 6E 20 2E 2E 2E 20 0D 0A 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA

    Referring to this Wiki, the first byte of a partition record should be a status byte (0x80 = bootable (active), 0x00 = non-bootable, other = invalid), but my MBR shows EB as the first byte. What does this record stand for? Also, can you provide me with good tables/images/tutorials for the MBR and other disk records? :)

    Read the article

  • Is it rude to add "TODO: wtf?" in source code?

    - by mafutrct
    I encountered something ... well, you know TDWTF ... something like that in an international project I'm working on. The code was written by a teammate. For a second I was tempted to add // TODO: wtf? to the offending code, but restrained myself. The project is indeed on a professional level, but for internal conversation more colloquial language is used - though still no "bad" words as in "wtf". Usually I'd surely not add such a comment, but I believe there are a few factors that still allow consideration:

    1. It is not visible except as a comment in the source code (of course).
    2. It is internal to our team - other developers may happen to see it, but it is not their code.
    3. Comments in source code are usually accepted to be more colloquial, since they are "kept between us developers".

    Would you advise never to add such a comment? Or do you regard it as an edge case? Did you possibly add something similar yourself?

    Read the article

  • Is there a search engine that indexes the source code of a web page?

    - by Dexter
    I need to search the web for sites in our industry that use the same AdWords management company, to ensure that said company is not violating our contract, as they have been accused of doing. They use a tracking code in the template of every page, which has a certain domain in the URL, and I'm wondering if it's possible to "Google" the source code using some bot that crawls the code rather than the content. For example, I bought an unlimited license for an image gallery, and I was asked to type the license number in a comment just before the script. I thought it was just so a human could look at the source and find out if someone paid, but it turned out that they actually had a crawler looking for their source code and that comment. If it ran across the code on your site, it would look for the comment, and if it found one, it would check whether it was an existing license. If not, it would first notify you of your noncompliance, and then notify the owner of the script. Edit: I'm looking to index HTML and JavaScript only, not the server-side languages or Java.

    Read the article

  • Code Contracts: Do we have to specify Contract.Requires(...) statements redundantly in delegating methods?

    - by herzmeister der welten
    I'm intending to use the new .NET 4 Code Contracts feature for future development. This made me wonder if we have to specify equivalent Contract.Requires(...) statements redundantly in a chain of methods. I think a code example is worth a thousand words:

        public bool CrushGodzilla(string weapon, int velocity)
        {
            Contract.Requires(weapon != null);
            // long code
            return false;
        }

        public bool CrushGodzilla(string weapon)
        {
            Contract.Requires(weapon != null);  // specify the contract requirement here as well???
            return this.CrushGodzilla(weapon, int.MaxValue);
        }

    For runtime checking it doesn't matter much, as we will eventually always hit the requirement check, and we will get an error if it fails. However, is it considered bad practice not to specify the contract requirement again in the second overload? Also, there will be the feature of compile-time checking, and possibly also design-time checking, of code contracts. It seems it's not yet available for C# in Visual Studio 2010, but I think there are some languages, like Spec#, that already have it. These engines will probably give us hints when we write code that calls such a method and our argument currently can or will be null. So I wonder if these engines will always analyze a call stack until they find a method with a contract that is currently not satisfied? Furthermore, here I learned about the difference between Contract.Requires(...) and Contract.Assume(...). I suppose that difference is also to be considered in the context of this question?

    Read the article
