Search Results


  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via named-based vhosts. Before trying to get Varnish to play nice with Drupal I'm trying to just get Varnish to a PNG from cache. Here are the headers I get from a curl -I request of the PNG file: HTTP/1.1 200 OK Server: Apache/2.2.22 (Ubuntu) Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT ETag: "a57c2-3850-4cb7ea73db6c0" Accept-Ranges: bytes Content-Length: 14416 Cache-Control: max-age=1209600 Expires: Thu, 25 Oct 2012 22:55:14 GMT Content-Type: image/png Accept-Ranges: bytes Date: Thu, 11 Oct 2012 22:55:14 GMT X-Varnish: 1766703058 Age: 0 Via: 1.1 varnish Connection: keep-alive X-Varnish-Cache: MISS Here is the Varnish VCL file I'm using (It's a default VCL configuration designed for Drupal): # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.1"; .port = "8080"; } # Respond to incoming requests. sub vcl_recv { # Use anonymous, cached pages if all backends are down. if (!req.backend.healthy) { unset req.http.Cookie; } # Allow the backend to serve up stale content if it is responding slowly. set req.grace = 6h; # Pipe these paths directly to Apache for streaming. #if (req.url ~ "^/admin/content/backup_migrate/export") { # return (pipe); #} # Do not cache these paths. if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" || req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" || req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" || req.url ~ "^.*/ahah/.*$") { return (pass); } # Do not allow outside access to cron.php or install.php. #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) { # Have Varnish throw the error directly. # error 404 "Page not found."; # Use a custom error page that you've defined in Drupal at the path "404". # set req.url = "/404"; #} # Always cache the following file types for all users. This list of extensions # appears twice, once here and again in vcl_fetch so make sure you edit both # and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset req.http.Cookie; } # Remove all cookies that Drupal doesn't need to know about. We explicitly # list the ones that Drupal does need, the SESS and NO_CACHE. If, after # running this code we find that either of these two cookies remains, we # will pass as the page cannot be cached. if (req.http.Cookie) { # 1. Append a semi-colon to the front of the cookie string. # 2. Remove all spaces that appear after semi-colons. # 3. Match the cookies we want to keep, adding the space we removed # previously back. (\1) is first matching group in the regsuball. # 4. Remove all other cookies, identifying them by the fact that they have # no space after the preceding semi-colon. # 5. Remove all spaces and semi-colons from the beginning and end of the # cookie string. set req.http.Cookie = ";" + req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { # If there are no remaining cookies, remove the cookie header. If there # aren't any cookie headers, Varnish's default behavior will be to cache # the page. 
unset req.http.Cookie; } else { # If there is any cookies left (a session or NO_CACHE cookie), do not # cache the page. Pass it on to Apache directly. return (pass); } } } # Set a header to track a cache HIT/MISS. sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; } else { set resp.http.X-Varnish-Cache = "MISS"; } } # Code determining what to do when serving items from the Apache servers. # beresp == Back-end response from the web server. sub vcl_fetch { # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but # definitely in Drupal's case these responses are not cacheable by default. if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) { set beresp.ttl = 10m; } # Don't allow static files to set cookies. # (?i) denotes case insensitive in PCRE (perl compatible regular expressions). # This list of extensions appears twice, once here and again in vcl_recv so # make sure you edit both and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset beresp.http.set-cookie; } # Allow items to be stale if needed. set beresp.grace = 6h; } # In the event of an error, show friendlier messages. sub vcl_error { # Redirect to some other URL in the case of a homepage failure. #if (req.url ~ "^/?$") { # set obj.status = 302; # set obj.http.Location = "http://backup.example.com/"; #} # Otherwise redirect to the homepage, which will likely be in the cache. set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" <html> <head> <title>Page Unavailable</title> <style> body { background: #303030; text-align: center; color: white; } #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; } a, a:link, a:visited { color: #CCC; } .error { color: #222; } </style> </head> <body onload="setTimeout(function() { window.location = '/' }, 5000)"> <div id="page"> <h1 class="title">Page Unavailable</h1> <p>The page you requested is temporarily unavailable.</p> <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p> <div class="error">(Error "} + obj.status + " " + obj.response + {")</div> </div> </body> </html> "}; return (deliver); } I'm getting a MISS and age 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?


  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration. 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) The Varnish Servers performance are awfully bad in general, to the point that if we shut down one of them the other two are unable to fullfill all the requests and start to skip beats resulting in pending requests, timeouts, 404, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k request per seconds during our max peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output: acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared - Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 20 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group varnish (113) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support off [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] idle_send_timeout 60 [seconds] listen_address :80 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] pcre_match_limit 10000 [] pcre_match_limit_recursion 10000 [] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 600 [seconds] sess_timeout 5 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 2000 [threads] thread_pool_min 5 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user varnish (106) vcc_err_unref on [bool] vcl_dir /etc/varnish vcl_trace off [bool] vmod_dir /usr/lib/varnish/vmods waiter default (epoll, poll) This is our default.vcl file: LINK sub vcl_recv { # BASIC recv COMMANDS: # # lookup -> search the item in the cache # pass -> always serve a fresh item (no-caching) # pipe -> like pass but ensures a direct-connection with the backend (no-cache AND no-proxy) # Allow the backend to serve up stale content if it is responding slow. 
# This defines when Varnish should use a stale object if it has one in the cache. set req.grace = 30s; if (client.ip == "127.0.0.1") { # request from NGINX - do not alter X-Forwarded-For set req.http.HTTPS = "on"; } else { # Add an X-Forwarded-For to keep track of original request unset req.http.HTTPS; unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } set req.backend = www_director; # Strip all cookies to force an anonymous request when the back-end servers are down. if (!req.backend.healthy) { unset req.http.Cookie; } ## HHTP Accept-Encoding if (req.http.Accept-Encoding) { if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* non-RFC2616 or CONNECT */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { return (pass); } if (req.http.HTTPS ~ "on") { return (pass); } ###################################################### # COOKIE HANDLING ###################################################### # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) { if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { return (pass); } } return (lookup); } # Code determining what to do when serving items from the IIS Server sub vcl_fetch { unset beresp.http.Server; set beresp.http.Server = "Server-1"; # Allow items to be stale if needed. This is the maximum time Varnish should keep an object. set beresp.grace = 1h; if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") { unset beresp.http.set-cookie; } # Default Varnish VCL logic if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has specific TB_NC no-caching cookie if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { set beresp.http.X-Cacheable = "NO:Got Cookie"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control private else if (beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") { set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)"; set beresp.ttl = 120 s; return(hit_for_pass); } # If we reach to this point, the object is cacheable. # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish # and can be checked using the following command: # varnishadm param.show default_ttl else if (beresp.ttl < 1s) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } # Cacheable and with valid TTL. 
else { set beresp.http.X-Cacheable = "YES"; } # DEBUG INFO (Cookies) # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie; return(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; if (obj.status == 404) { synthetic {" <!-- Markup for the 404 page goes here --> "}; } else if (obj.status == 500) { synthetic {" <!-- Markup for the 500 page goes here --> "}; } else if (obj.status == 503) { if (req.restarts < 4) { return(restart); } else { synthetic {" <!-- Markup for the 503 page goes here --> "}; } } else { synthetic {" <!-- Markup for a generic error page goes here --> "}; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } Thanks in advance,


  • Network throughput issue (ARP-related)

    - by Joel Coel
    The small college where I work is having some very strange network issues. I'm looking for any advice or ideas here. We were fine over the summer, but the trouble began a few days after students returned to campus in force for the fall term.

Symptoms
The main symptom is that internet access will work, but it's very slow... often to the point of timeouts. As an example, a typical result from Speedtest.net will return .4Mbps download, but allow 3 to 8 Mbps upload speed. Lesser symptoms may include severely limited performance transferring data to and from our file server, or even in some cases the inability to log in to the computer (cannot reach the domain controller). The issue crosses multiple vlans, and has affected devices on nearly every vlan we operate. The issue does not impact all machines on the network. An unaffected machine will typically see at least 11Mbps download from speedtest.net, and perhaps much more depending on larger campus traffic patterns at the time. There is one variation on the larger issue. We have one vlan where users were unable to log into nearly all of the machines at all. IT staff would log in using a local administrator account (or in some cases cached credentials), and from there a release/renew or pinging the gateway would allow the machine to work... for a while. Complicating this issue is that this vlan covers our computer labs, which use software called Deep Freeze to completely reset the hard drives after a reboot. It could just be the same issue manifesting differently because of stale data on machines that have not permanently altered low-level info for weeks. We were able to solve this, however, by creating a new vlan and moving the labs over to the new vlan wholesale.

Instigations
Eventually we noticed that the affected machines all had recent dhcp leases. We can predict when a machine will become "slow" by watching when a dhcp lease comes up for renewal. We played with setting the lease time very short for a test vlan, but all that did was remove our ability to predict when the machine would become slow. Machines with static IPs have pretty much always worked normally. Manually releasing/renewing an address will never cause a machine to become slow. In fact, in some cases this process has fixed a machine in that state. Most of the time, though, it doesn't help. We also noticed that mobile machines like laptops are likely to become slow when they cross to new vlans. Wireless on campus is divided up into "zones", where each zone maps to a small set of buildings. Moving to a new building can place you in a new zone, thereby causing you to get a new address. A machine resuming from sleep mode is also very likely to be slow.

Mitigations
Sometimes, but not always, clearing the arp cache on an affected machine will allow it to work normally again. As already mentioned, releasing/renewing a local machine's IP address can fix that machine, but it's not guaranteed. Pinging the default gateway can also sometimes help with a slow machine. What seems to help most to mitigate the issue is clearing the arp cache on our core layer-3 switch. This switch is used for our dhcp system as the default gateway on all vlans, and it handles inter-vlan routing. The model is a 3Com 4900SX. To try to mitigate the issue, we have the cache timeout set on the switch all the way down to the lowest possible time, but it hasn't helped. I also put together a script that runs every few minutes to automatically connect to the switch and reset the cache.
Unfortunately, this does not always work, and can even cause some machines to end up in the slow state for a short time (though these seem to correct themselves after a few minutes). We currently have a scheduled job that runs every 10 minutes to force the core switch to clear its ARP cache, but this is far from perfect or desirable.

Reproduction
We now have a test machine that we can force into the slow state at will. It is connected to a switch with ports set up for each of our vlans. We make the machine slow by connecting to different vlans, and after a new connection or two it will be slow. It's also worth noting in this section that this has happened before at the start of prior terms, but in the past the problem has gone away on its own after a few days. It solved itself before we had a chance to do much diagnostic work... which is why we've allowed it to drag so long into the term this time 'round; the expectation was this would be a short-lived situation.

Other Factors
It's worth mentioning that we have had about half a dozen switches just outright fail over the last year. These are mainly 2003/2004-era 3Coms (mostly 4200's) that were all put in at about the same time. They should still be covered under warranty, but HP has made getting service somewhat difficult. The failures have mostly been power supplies, but in a couple of cases we have used a power supply from a switch with a failed mainboard to bring a switch with a failed power supply back to life. We do have UPS devices on all but three or four switches now, but that was not the case when I started two and a half years ago. Severe budget constraints (we were on the Dept. of Ed's financially challenged institutions list a couple years back) have forced me to look to the likes of Netgear and TrendNet for replacements, but so far these low-end models seem to be holding their own. It's also worth mentioning that the big change on our network this summer was migrating from a single cross-campus wireless SSID to the zoned approach mentioned earlier. I don't think this is the source of the issue, as, like I've said, we've seen this before. However, it's possible this is exacerbating the issue, and may be much of the reason it's been so hard to isolate.

Diagnosis
At first it seemed clear to us, given the timing and persistent nature of the problem, that the source of the issue was an infected (or malicious) student machine doing ARP cache poisoning. However, repeated attempts to isolate the source have failed. Those attempts include numerous wireshark packet traces, and even taking entire buildings offline for brief periods. We have not even been able to find a smoking-gun bad ARP entry. My current best guess is an overloaded or failing core switch, but I'm not sure how to test for this, and the cost of replacing it blindly is steep. Again, any ideas appreciated.


  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a Session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized 1st HTTP header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
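As a side note, here is a minimal, hypothetical sketch (not WebSurge code) of what hooking one of the other events mentioned above, BeforeRequest, might look like. BeforeRequest fires before the request is forwarded to the server, so it's the place to inspect or modify outgoing requests. The class, handler name and the extra header below are made up for illustration; only the event itself, the Session members and the Startup()/Shutdown() calls are the ones used in the snippets above.

    using System;
    using Fiddler;

    public static class BeforeRequestDemo
    {
        public static void Start()
        {
            // BeforeRequest fires before the request goes out to the server,
            // so the session can still be inspected or changed here.
            FiddlerApplication.BeforeRequest += OnBeforeRequest;

            // Same Startup() call as in the capture form above: port 8888,
            // register as system proxy, decrypt SSL, allow remote clients.
            FiddlerApplication.Startup(8888, true, true, true);
        }

        private static void OnBeforeRequest(Session sess)
        {
            // Log the outgoing request line.
            Console.WriteLine("{0} {1}", sess.RequestMethod, sess.fullUrl);

            // Tag the outgoing request with a marker header
            // (the header name is hypothetical, purely for illustration).
            sess.oRequest.headers["X-Capture-Demo"] = "1";
        }

        public static void Stop()
        {
            FiddlerApplication.BeforeRequest -= OnBeforeRequest;
            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }
    }

In a real application you would typically combine a handler like this with the same kind of filtering logic shown in the AfterSessionComplete implementation above.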
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
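To make that startup sequencing concrete, here is a minimal, hypothetical sketch of a guarded capture startup. It assumes the InstallCertificate() helper and the AfterSessionComplete handler from the earlier snippets, plus a WinForms context for MessageBox, so treat it as an illustration rather than WebSurge's actual code.

    void StartSslCapture()
    {
        // InstallCertificate() returns quickly when the Fiddler root certificate
        // already exists; it only creates and trusts a new one (showing the
        // confirmation prompt) when no root certificate is installed yet.
        if (!InstallCertificate())
        {
            MessageBox.Show("Could not install the Fiddler root certificate - SSL captures will fail.");
            return;
        }

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }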
Once the cert is installed you can then capture SSL requests. There's a gotcha however… Gotcha: FiddlerCore Certificates don't stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in an application configuration setting that's stored with the application settings (App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of certmaker.dll and bcmakecert.dll. When you use MakeCert.exe, the certificates settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate and as long as you run on Windows and you don’t need to support iOS or Android devices is simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BcMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed and on disk the files for Certmaker.dll, as well as the BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed FiddlerCore checks for it first before using the assemblies so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to gleam by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos!

Resources
FiddlerCore Download
FiddlerCore NuGet
Fiddler Capture Sample Form
Fiddler Capture Form in West Wind WebSurge (GitHub)
Eric Lawrence's Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP


  • Why do I disconnect every few seconds when using a USB wireless adapter?

    - by Rev3rse
    i know it's for ubuntu questions..but mint and ubuntu are very similiar and i had the same problem with linux ubuntu too..so i think this is the right place for my question anyway i don't have experience with drivers and other things,after installing Linux on my machine( i did dist-upgrade btw) everything seem to be great because i didn't have to install any driver, after a while i realized that my connection stop after few minutes(actually it shows that I'm connected but it's not) so i have to reconnect and after few minutes it disconnect again. I'm using Alfa USB wireless adapter AWS036H, and my Linux version is 11 i think the driver i'm using is Realtek i searched in the Internet and i found nothing. these are some outputs of few things people usually ask for: Note: I'm NOT using a laptop. dmsg: [19445.604448] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.174.220.77 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=104 ID=10466 DF PROTO=TCP SPT=55150 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19448.164050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=41982 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=7566 DF PROTO=TCP INCOMPLETE [8 bytes] ] [19465.079565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.128.216.31 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=5100 DF PROTO=TCP SPT=50169 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19486.270328] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.130.13.122 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=22207 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19497.480522] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [19497.593276] cfg80211: All devices are disconnected, going to restore regulatory settings [19497.593282] cfg80211: Restoring regulatory settings [19497.593346] cfg80211: Calling CRDA to update world regulatory domain [19497.638740] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [19497.638745] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638749] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [19497.638753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638756] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [19497.638760] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638763] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [19497.638766] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638770] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [19497.638773] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638776] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [19497.638780] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638783] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [19497.638787] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638790] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with 
regulatory rule: [19497.638794] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638797] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [19497.638801] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638804] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [19497.638807] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638811] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [19497.638814] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638817] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule: [19497.638821] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638824] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [19497.638828] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638831] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [19497.638835] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638838] cfg80211: World regulatory domain updated: [19497.638841] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [19497.638845] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638848] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638852] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638855] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638859] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19513.145150] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [19513.146910] wlan0: authenticated [19513.252775] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [19513.255149] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [19513.255154] wlan0: associated [19515.675091] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=42720 DF PROTO=TCP SPT=1945 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19525.684312] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=49890 DF PROTO=TCP SPT=53401 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19551.856766] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=85.228.39.93 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=103 ID=1162 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19564.623005] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.202.21.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=17881 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19584.855364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.49.151.87 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=31716 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19604.688647] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.225.124.155 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=6656 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19626.362529] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.184.50.41 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=23241 DF 
PROTO=TCP SPT=1416 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19645.040906] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=92.250.245.244 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=50061 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19665.212659] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.183.3.18 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=111 ID=1689 DF PROTO=TCP SPT=62817 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19685.036415] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=50638 DF PROTO=TCP SPT=49624 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19705.487915] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.122.17.82 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=112 ID=19070 DF PROTO=TCP SPT=54795 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19726.779185] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.88.116.239 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=32168 DF PROTO=TCP SPT=57330 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19744.755673] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.124.5.43 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=2288 DF PROTO=TCP SPT=6475 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19764.449183] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.216.35.19 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4281 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19784.456189] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.82.25.149 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=1866 DF PROTO=TCP SPT=59507 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19804.836687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.56.199.3 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=14749 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19824.812685] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=186.28.7.159 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=44686 PROTO=UDP SPT=23418 DPT=6881 LEN=28 [19847.683314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=63046 DF PROTO=TCP SPT=52192 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19884.711455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=27914 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19884.983589] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.107.130.61 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=7742 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19905.681078] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=95.21.11.121 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=31775 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19926.035707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.76.132.55 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=28140 DF PROTO=TCP SPT=51905 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19945.668326] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=188.92.0.197 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=7865 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19967.200339] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.102.172 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=105 ID=28408 DF PROTO=TCP SPT=63505 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19999.752732] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.166.171.200 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=36405 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20007.928719] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.235.59.16 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=46415 DF PROTO=TCP SPT=4537 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [20026.181726] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.182.169.36 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=106 ID=25126 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20048.845358] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.66.118.104 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=111 ID=18068 DF PROTO=TCP SPT=49928 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20064.341857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=77.2.63.153 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=7242 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20090.093490] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=93.16.17.210 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=894 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20104.443995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=89.83.235.99 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=17295 DF PROTO=TCP SPT=58979 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20128.625374] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.62.91.79 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=21793 DF PROTO=TCP SPT=51446 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20151.055506] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.135.217.213 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=112 ID=32452 DF PROTO=TCP SPT=55136 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20164.618874] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=47784 DF PROTO=TCP SPT=2422 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20184.337745] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.212.71 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=14544 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20205.007512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.62.158.247 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=21562 DF PROTO=TCP SPT=3933 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20225.204018] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=113 ID=15045 DF PROTO=TCP SPT=49630 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20244.842290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=82.82.190.168 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=23741 DF PROTO=TCP SPT=50766 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20266.701649] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=88.153.108.124 DST=192.168.1.6 LEN=48 TOS=0x02 PREC=0x00 TTL=111 ID=206 DF PROTO=TCP SPT=2451 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20286.305414] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.240.86.73 
DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=107 ID=325 DF PROTO=TCP SPT=65184 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20294.293989] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43133 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56899 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297015] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43134 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12080 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297242] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43135 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25195 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20295.478338] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [20295.552735] cfg80211: All devices are disconnected, going to restore regulatory settings [20295.552742] cfg80211: Restoring regulatory settings [20295.552748] cfg80211: Calling CRDA to update world regulatory domain [20295.680635] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [20295.680641] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680644] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [20295.680648] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680652] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [20295.680655] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680658] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [20295.680662] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680665] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [20295.680669] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680672] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [20295.680676] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680679] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [20295.680683] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680687] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule: [20295.680690] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680693] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [20295.680697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680700] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [20295.680704] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680708] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [20295.680711] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680715] cfg80211: Updating information on frequency 2467 
MHz for a 20 MHz width channel with regulatory rule: [20295.680718] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680722] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [20295.680725] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680728] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [20295.680732] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680736] cfg80211: World regulatory domain updated: [20295.680738] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [20295.680742] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680745] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680749] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680752] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680756] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20306.009341] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [20306.011225] wlan0: authenticated [20306.118095] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [20306.120963] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [20306.120967] wlan0: associated [20307.364427] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.91.101.130 DST=192.168.1.6 LEN=64 TOS=0x00 PREC=0x00 TTL=49 ID=36839 DF PROTO=TCP SPT=62492 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20310.914290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43180 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56900 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.936634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43181 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12081 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.939017] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43182 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25196 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20325.941050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.118.78.99 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4407 PROTO=UDP SPT=2970 DPT=6881 LEN=28 [20328.801724] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43196 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56901 DF PROTO=TCP INCOMPLETE [8 bytes] ] ... inxi -N Network: Card-1 Realtek RTL8101E/RTL8102E PCI Express Fast Ethernet controller driver r8169 Card-2 Realtek RTL-8139/8139C/8139C+ driver 8139too /usr/lib/linuxmint/mintWifi/mintWifi.py ------------------------- * I. scanning WIFI PCI devices... ------------------------- * II. querying ndiswrapper... ------------------------- * III. querying iwconfig... lo no wireless extensions. eth0 no wireless extensions. eth1 no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:"Home" Mode:Managed Frequency:2.437 GHz Access Point: 00:24:C8:4B:46:E0 Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:1132 Missed beacon:0 ------------------------- * IV. querying ifconfig... eth0 Link encap:Ethernet HWaddr 00:1f:d0:c9:b8:8e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:43 Base address:0x4000 eth1 Link encap:Ethernet HWaddr 00:0e:2e:77:88:16 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19 Base address:0xd000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:10696 errors:0 dropped:0 overruns:0 frame:0 TX packets:10696 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3823011 (3.8 MB) TX bytes:3823011 (3.8 MB) wlan0 Link encap:Ethernet HWaddr 00:c0:ca:44:62:d1 inet addr:192.168.1.6 Bcast:255.255.255.255 Mask:255.255.255.0 inet6 addr: fe80::2c0:caff:fe44:62d1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:90424 errors:0 dropped:0 overruns:0 frame:0 TX packets:65201 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:98024465 (98.0 MB) TX bytes:10345450 (10.3 MB) ------------------------- * V. querying DHCP... lspci 00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) 00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) 00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce 9400 GT] (rev a1) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8139/8139C/8139C+ (rev 10) lsmod Module Size Used by ipt_REJECT 12512 1 ipt_LOG 12784 5 xt_limit 12541 7 xt_tcpudp 12531 8 ipt_addrtype 12535 4 xt_state 12514 7 ip6table_filter 12711 1 ip6_tables 22545 1 ip6table_filter nf_nat_irc 12542 0 nf_conntrack_irc 13138 1 nf_nat_irc nf_nat_ftp 12548 0 nf_nat 24827 2 nf_nat_irc,nf_nat_ftp nf_conntrack_ipv4 19024 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13106 1 nf_nat_ftp nf_conntrack 69744 7 xt_state,nf_nat_irc,nf_conntrack_irc,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18125 1 iptable_filter x_tables 21907 10 ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,ipt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables nls_utf8 12493 10 udf 83795 1 crc_itu_t 12627 1 udf usb_storage 43946 1 uas 17676 0 snd_seq_dummy 12686 0 cryptd 19801 0 aes_i586 16956 1 aes_generic 38023 1 aes_i586 binfmt_misc 13213 1 dm_crypt 22463 0 vesafb 13449 1 nvidia 9766978 44 arc4 12473 2 rtl8187 56206 0 mac80211 257001 1 rtl8187 cfg80211 156212 2 rtl8187,mac80211 ppdev 12849 0 snd_hda_codec_realtek 255882 1 parport_pc 32111 1 psmouse 73312 0 eeprom_93cx6 12653 1 rtl8187 snd_hda_intel 24113 5 snd_hda_codec 90901 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13274 1 snd_hda_codec snd_pcm 80042 3 snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 snd_rawmidi 25269 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51291 3 snd_seq_dummy,snd_seq_midi,snd_seq_midi_event snd_timer 28659 2 snd_pcm,snd_seq snd_seq_device 14110 4 snd_seq_dummy,snd_seq_midi,snd_rawmidi,snd_seq joydev 17322 0 snd 55295 18 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device serio_raw 12990 0 soundcore 12600 1 snd snd_page_alloc 14073 2 snd_hda_intel,snd_pcm lp 13349 0 parport 36746 3 ppdev,parport_pc,lp usbhid 41704 0 hid 77084 1 usbhid dm_raid45 88410 0 xor 21860 1 dm_raid45 btrfs 527388 0 zlib_deflate 26594 1 btrfs libcrc32c 12543 1 btrfs 8139too 23208 0 8139cp 22497 0 r8169 42534 0 floppy 60032 0


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is a very powerful way to upload large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side, which means once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation on the request length, and the maximum request length is defined in the web.config file. It is a value that must be less than roughly 4GB (the relevant settings are sketched below). So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. Uploading a big file in chunks was a big challenge until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from byte index 512 up to (but not including) byte index 768, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
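As referenced above, the request size limit lives in web.config. The following is only a hedged sketch of the standard ASP.NET/IIS settings involved; the values shown (1 GB) are illustrative assumptions, not recommendations from this post.

<!-- maxRequestLength is in KB (ASP.NET); maxAllowedContentLength is in bytes (IIS 7+ request filtering) -->
<system.web>
  <httpRuntime maxRequestLength="1048576" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1073741824" />
    </requestFiltering>
  </security>
</system.webServer>

Even with these raised, a hard ceiling and the load balancer timeout remain, so chunked upload is still the more robust approach. 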
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binary. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel performs better than sequential. - There is no significant performance difference between parallel with 4 threads and with 8 threads. - Transferring binary chunks performs better than base64. - In all cases, a chunk size of 1MB - 2MB gives the best performance.   Hope this helps, Shaun. All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Service Discovery in WCF 4.0 &ndash; Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints of the services it invokes. This is not a problem in a small system, but when you have more than 10 services it becomes one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.   Benefit of Discovery Service Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this (configuration) dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. Using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can then use one of those endpoints to send the request to service Y. And when a service changes its endpoint address, all it needs to do is update its records in the discovery service, and then all others will know its new endpoint. WCF 4.0 Discovery supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services which enabled the discovery behavior will receive this message, and only those that match will send their endpoint back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.   Service Announcement and Probe The main functionality of the discovery service should be to return the proper endpoint addresses back to the service that is looking for them. In most cases the consuming service (as a client) will send the contract it wants to request to the discovery service, and the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough; the criteria may also contain versioning and extension attributes. In this post I will only cover the case that includes contract and endpoint. When a client (or sometimes a service which needs to invoke another service) needs to connect to a target service, it will first request the discovery service through the "Probe" method with the criteria. Basically the criteria contains the contract type name of the target service. Then the discovery service will search its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service will grab the endpoint information (called discovery endpoint metadata in WCF) and send it back. This is called "Probe". 
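From the client's point of view, a probe boils down to a single Find call on DiscoveryClient. Below is only a minimal sketch, not this post's implementation: the probe endpoint address, the NetTcpBinding and the IStringService contract are placeholders here (all of them appear for real later in this post, where they are read from configuration).

using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

static class ProbeSketch
{
    static void ProbeForStringService()
    {
        // Assumption: the discovery service exposes its probe endpoint at this address over net.tcp.
        var probeEndpoint = new DiscoveryEndpoint(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:10011/probe"));

        using (var discoveryClient = new DiscoveryClient(probeEndpoint))
        {
            // Ask the discovery service for every endpoint registered for the given contract.
            FindResponse response = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
            foreach (var endpointMetadata in response.Endpoints)
            {
                Console.WriteLine(endpointMetadata.Address);
            }
        }
    }
}

The call receiving this probe on the service side is exactly the OnBeginFind/OnEndFind pair we are about to implement. 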
Finally, the client receives the discovery endpoint metadata and uses that endpoint to connect to the target service. Besides the probe, the discovery service should also take responsibility for knowing when a new service becomes available (goes online), as well as when it stops (goes offline). This feature is named "Announcement". When a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include: 1, an endpoint which receives the service online message and adds the service endpoint information to the discovery repository. 2, an endpoint which receives the service offline message and removes the service endpoint information from the discovery repository. 3, an endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata. The WCF 4.0 discovery service covers all these features in its infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint registration, unregistration and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service are asynchronous, for better service capacity and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method. 3, OnBeginFind, OnEndFind: invoked when a client sends the probe message that wants to find the service endpoint information. We need to look for the proper endpoints by matching the client's criteria against the repository in this method. 4, OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Different from the find method, when using the resolve method the discovery service will return exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement messages and probe messages.   Discoverable Service and Client Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as the offline message before it shuts down. Just create a simple service which converts the incoming string to uppercase. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to test, the service address will be changed each time it's started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it can send the online and offline messages to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. Once we specify the announcement endpoint address and append this behavior to the service behaviors, the service becomes discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.   On the client side, once we have the discovery service, a client can invoke a service without knowing its endpoint. 
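Before we move on to the client, a quick aside on the service side: the following is a minimal sketch of what ServiceDiscoveryBehavior effectively does when the host opens and closes, using the AnnouncementClient class from System.ServiceModel.Discovery. This is not the behavior's actual implementation; it assumes "host" is the opened ServiceHost and "announcementEndpoint" is the endpoint built from the configuration above.

// A hedged sketch only: announce every application endpoint as online when the host opens.
// Assumes "host" and "announcementEndpoint" are the variables from the hosting code above.
var announcementClient = new AnnouncementClient(announcementEndpoint);
foreach (var endpoint in host.Description.Endpoints)
{
    announcementClient.AnnounceOnline(EndpointDiscoveryMetadata.FromServiceEndpoint(endpoint));
}
// ... later, before the host closes, announce the same endpoints as offline
foreach (var endpoint in host.Description.Endpoints)
{
    announcementClient.AnnounceOffline(EndpointDiscoveryMetadata.FromServiceEndpoint(endpoint));
}
announcementClient.Close();

With the behavior attached these announcements are wired into the host's open and close sequence automatically, so you normally never call AnnouncementClient yourself.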
The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient and specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoked the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service will return all endpoints that match the find criteria, which means the result of the Find method might contain more than one endpoint. In this example I just returned the first matched one. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we have probed the discovery service we will receive the endpoint. So in the client code we can create the channel factory from the endpoint and binding, and invoke the service. When creating the client side channel factory we need to make sure that the client side binding is the same as the service side's. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that point the client doesn't need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let's have a test. First start the discovery service, and then start our discoverable service. When it starts it will announce itself to the discovery service and register its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asked the discovery service for the endpoint and then established the connection to the discoverable service. More interestingly, do NOT close the client console, but terminate the discoverable service by pressing the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository. And when finding, I also just return the first matched endpoint. I also hard coded the bindings between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended that interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and the OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi). Once replaced, it can modify the kernel. Trivial to replace bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited to handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per hosts (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large, complex, difficult to maintain code (8750 lines of JavaScript, for example) Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but input access and buffer overflow code onto stack Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack jumping to malware Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways "RoP" refers to Return-oriented Programming attacks. RoP attacks use your own code and write a return address on the stack to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to new code. The original code sections in the file are marked non-executable. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PEM) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started July 2011 and paid out $1.5 million to date. 
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow you to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (example, DES-CBC versus AES-CBC) One can tell a lot about VPNs just from ping roundtrips (such as what router is used) Delayed packets are not informative about a network, especially if far away from the network More needs to be explored about how TCP works in real life with respect to timing Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories creation of unnamed scheduled tasks obvious names of files and syscalls ("ClearEventLog") uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and good first clue to look for on a suspect system Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits. Can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. Also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink. 
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found in stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from checksum (that is, it leaks data). Other parts of packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so can get more bits of information out of using both checksums than just using one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, can import as CSV format into a spreadsheet. Can corelate with geo data and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. Was able to find the exact IP address, given the geo and university of the sender. Point is if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security. Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There's invisible links in the chain-of-trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose if there's no bugs and you trust the author, can you trust the code? Hell No! There's too many factors—it's Babylonian in nature. Why not? Well, Input is not well-defined/recognized (code's assumptions about "checked" input will be violated (bug/vunerabiliy). For example, HTML is recursive, but Regex checking is not recursive. Input well-formed but so complex there's no telling what it does For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of program or toolchain Any Input is a program input executes on input handlers (drives state changes & transitions) only a well-defined execution model can be trusted (regex/DFA, PDA, CFG) Input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age ELF ABI (UNIX/Linux executible file format) case study. Problems can arise from these steps (without planting bugs): compiler linker loader ld.so/rtld relocator DWARF (debugger info) exceptions The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). 
The only solution is to freeze code and sign it. But you can't freeze everything! Can't freeze ASLR or loading—must have tables and metadata. Any sufficiently complex input data is the same as VM byte code Example, ELF relocation entries + dynamic symbols == a Turing Complete Machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata" @bxsays did same thing with Mach-O bytecode Or DWARF exception handling data .eh_frame + glibc == Turing Machine X86 MMU (IDT, GDT, TSS): used address translation to create a Turing Machine. Page handler reads and writes (on page fault) memory. Uses a page table, which can be used as Turing Machine byte code. Example on Github using this TM that will fly a glider across the screen Next Sergey talked about "Parser Differentials". That having one input format, but two parsers, will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS tool chain, which are all different. Can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions trusting computers is not only about bugs! Bugs are part of the problem, but not by far all of it complex data formats mean bugs no "chain of trust" in Babylon! (that is, with parser differentials) we need to squeeze complexity out of data until data stops being "code equivalent" Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
Before we begin: OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger picture view of the network architecture in OpenStack. This can be very helpful to users taking their first steps in OpenStack or anyone who wishes to understand how networking works in this environment.  We will go through the basics first and build the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers) and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options to set up OpenStack networking and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or even not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components, it is meant to describe the components as much as needed to understand OpenStack network architecture. All the components described here can be further explored using other resources. 
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply help to organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns. Let's see how namespaces work through a couple of examples. To control network namespaces we use the ip netns command.
Defining a new namespace:
# ip netns add my-ns
# ip netns list
my-ns
As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:
# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:16436 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
We can run any command in the namespace context. This is especially useful for debugging with the tcpdump command; we can ping or ssh or define iptables rules, all within the namespace.
Connecting the namespace to the outside world: there are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces and then we can add those interfaces to a namespace. So first let's add a bridge to OVS:
# ovs-vsctl add-br my-bridge
Now let's add a port on the OVS bridge and make it internal:
# ovs-vsctl add-port my-bridge my-port
# ovs-vsctl set Interface my-port type=internal
And let's connect it into the namespace:
# ip link set my-port netns my-ns
Looking inside the namespace:
# ip netns exec my-ns ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:65536 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21
          BROADCAST  MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Now we can add more ports to the OVS bridge and connect them to other namespaces or to other devices such as physical interfaces. Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail.
Linux Bridge and veth pairs
A Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is security group enforcement: security groups are implemented using iptables, and iptables can only be applied to Linux bridges, not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool for debugging a network problem. Veth pairs are simply a virtual wire, so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge, or it is simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example uses a regular Linux server, not an OpenStack node.
Creating a veth pair; note that we define names for both ends:
# ip link add veth0 type veth peer name veth1
# ifconfig -a
. .
veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17
          BROADCAST MULTICAST  MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8
          BROADCAST MULTICAST  MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
. .
To make the example more meaningful we will create the following setup:
veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server
br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3
eth3 – a physical interface with no IP on it, connected to a private network
eth2 – a physical interface on the remote Linux box, connected to the private network and configured with the IP 50.50.50.51
Once we create the setup we will ping 50.50.50.51 (the remote IP) through veth0 to test that the connection is up:
# brctl addbr br-eth3
# brctl addif br-eth3 eth3
# brctl addif br-eth3 veth1
# brctl show
bridge name     bridge id               STP enabled     interfaces
br-eth3         8000.00505682e7f6       no              eth3
                                                        veth1
# ifconfig veth0 50.50.50.50
# ping -I veth0 50.50.50.51
PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms
When the naming is not as obvious as in the previous example and we don't know which veth interfaces are paired, we can use the ethtool command to figure this out. The ethtool command returns an index which we can then look up with the ip link command, for example:
# ethtool -S veth1
NIC statistics:
peer_ifindex: 12
# ip link
. .
12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
Summary
That's all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when we review the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
INTRODUCTION
If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the SharePoint built-in web services did not suffice. I evaluated the built-in services against creating my own WCF services to do what I needed. The project I'm working on right now involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them into my coding practices). I was to deliver this dll to the application team; they would simply call the methods exposed by the dll and voila! it would do some task or operation in SharePoint.
SOLUTION
My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same; it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project I created a class for every web service call. So for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service, and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each. 
Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project and this is the class that contacts it:
class DocumentRetrieval
{
    private ListsSoapClient _service;
    private bool _impersonation;

    public DocumentRetrieval(bool impersonation, string endpt)
    {
        _service = new ListsSoapClient();
        this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
        _impersonation = impersonation;
        if (_impersonation)
        {
            _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
            _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
            _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                System.Security.Principal.TokenImpersonationLevel.Impersonation;
        }
    }

    private void SetEndPoint(string p)
    {
        _service.Endpoint.Address = new EndpointAddress(p);
    }

    /// <summary>
    /// Creates a document library with a specific name and template ID
    /// </summary>
    /// <param name="listName">New list name</param>
    /// <param name="templateID">Template ID</param>
    /// <returns></returns>
    public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
    {
        XmlDocument sample = new XmlDocument();
        XmlElement viewCol = sample.CreateElement("Empty");
        try
        {
            _service.Open();
            viewCol = _service.AddList(listName, "", templateID);
        }
        catch (Exception ex)
        {
            exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
        }
        finally
        {
            _service.Close();
        }
        return viewCol;
    }
}
There was a lot more in this class (that I am not including), because I was reusing the grunt work for other operations with Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method, so that the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check. 
So this is what I wrote:
public bool DocLibExists(string listName, ref ExceptionContract exContract)
{
    try
    {
        var allLists = _service.GetListCollection();
        return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
    }
    catch (Exception ex)
    {
        exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
    }
    return false;
}
This really just gets an XmlElement with all the lists; it was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to: instead of working with objects, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through the response and check whether the "Title" attribute existed with the value of the list name; if so it returns true, otherwise false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views per project, and each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service calls, you could configure views, versioning, even content types, etc…; the only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all; I was not sure if it was a security, performance or code-based issue. At times it was also difficult to work with because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template. 
So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that the Template Builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but only has access to the built-in library template IDs; any new ones added or created are not available to create from. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template, and it can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.
public ServiceResult CreateProject(string name, string templateName, string projectId)
{
    string siteurl = SPContext.Current.Site.Url;
    Guid webguid = SPContext.Current.Web.ID;

    using (SPSite site = new SPSite(siteurl))
    {
        using (SPWeb rootweb = site.RootWeb)
        {
            SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
            ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
        } // SPWeb
    } // SPSite
    return _globalResult;
}

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
    }
}

private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
{
    using (SPSite sitecollection = new SPSite(siteurl))
    {
        using (SPWeb web = sitecollection.AllWebs[webguid])
        {
            action(web);
        }
    }
}
This is actually some of the code I implemented for the service; there was a lot more I did on project creation which I will cover in my next blog post. I implemented the ProcessWeb helper, which takes an Action<SPWeb>, to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "we reference all our projects with IDs in our application; we want SharePoint to do the same". This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata to be added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties. 
The following code enhances and replaces the "Act_CreateProject" method above:
private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        SPLimitedWebPartManager wpmgr = null;
        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            SPFolder rootFolder = newList.RootFolder;
            rootFolder.Properties.Add(KEY, projectId);
            rootFolder.Update();
            if (rootFolder.ParentWeb != targetsite)
                rootFolder.ParentWeb.Dispose();
            if (!templateName.Contains("Natural"))
            {
                SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                alldocfile.Update();
            }
            if (newList.ParentWeb != targetsite)
                newList.ParentWeb.Dispose();
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
        finally
        {
            if (wpmgr != null)
            {
                wpmgr.Web.Dispose();
                wpmgr.Dispose();
            }
        }
    }
}

private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
{
    var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
    if (wp != null)
    {
        (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
        mgr.SaveChanges(wp);
    }
}
This shows how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by IDs and will not care about names. My "DocLibExists" method will now need to be changed, because the web service is not set up to look at property bags, so I had to write another method on the service to do the equivalent but with IDs instead of names. The second thing you will notice about the code is the use of the SPLimitedWebPartManager. I have seen several examples online and have also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did. 
CONCLUSION
This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the built-in web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • help with fixing fwts errors log

    - by jasmines
    Here is an extract of results.log: MTRR validation. Test 1 of 3: Validate the kernel MTRR IOMEM setup. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. FAILED [MEDIUM] MTRRIncorrectAttr: Test 1, Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. ==================================================================================================== Test 1 of 1: Kernel log error check. Kernel message: [ 0.208079] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored ADVICE: This is not exactly a failure mode but a warning from the kernel. The _OSI() method has implemented a match to the 'Linux' query in the DSDT and this is redundant because the ACPI driver matches onto the Windows _OSI strings by default. FAILED [HIGH] KlogACPIErrorMethodExecutionParse: Test 1, HIGH Kernel message: [ 3.512783] ACPI Error : Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) ADVICE: This is a bug picked up by the kernel, but as yet, the firmware test suite has no diagnostic advice for this particular problem. Found 1 unique errors in kernel log. ==================================================================================================== Check if system is using latest microcode. ---------------------------------------------------------------------------------------------------- Cannot read microcode file /usr/share/misc/intel-microcode.dat. Aborted test, initialisation failed. ==================================================================================================== MSR register tests. FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). MSR CPU 0 -> 0xf7bb9c40 vs CPU 1 -> 0xf7bc7c40 FAILED [MEDIUM] MSRCPUsInconsistent: Test 1, MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). MSR CPU 0 -> 0x850088 vs CPU 1 -> 0x850089 ==================================================================================================== Checks firmware has set PCI Express MaxReadReq to a higher value on non-motherboard devices. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check firmware settings MaxReadReq for PCI Express devices. MaxReadReq for pci://00:00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) is low (128) [Audio device]. MaxReadReq for pci://00:02:00.0 Network controller: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection is low (128) [Network controller]. FAILED [LOW] LowMaxReadReq: Test 1, 2 devices have low MaxReadReq settings. Firmware may have configured these too low. ADVICE: The MaxReadRequest size is set too low and will affect performance. It will provide excellent bus sharing at the cost of bus data transfer rates. Although not a critical issue, it may be worth considering setting the MaxReadRequest size to 256 or 512 to increase throughput on the PCI Express bus. Some drivers (for example the Brocade Fibre Channel driver) allow one to override the firmware settings. Where possible, this BIOS configuration setting is worth increasing it a little more for better performance at a small reduction of bus sharing. ==================================================================================================== PCIe ASPM check. 
---------------------------------------------------------------------------------------------------- Test 1 of 2: PCIe ASPM ACPI test. PCIE ASPM is not controlled by Linux kernel. ADVICE: BIOS reports that Linux kernel should not modify ASPM settings that BIOS configured. It can be intentional because hardware vendors identified some capability bugs between the motherboard and the add-on cards. Test 2 of 2: PCIe ASPM registers test. WARNING: Test 2, RP 00h:1Ch.01h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.01h L1 not enabled. WARNING: Test 2, Device 02h:00h.00h L0s not enabled. WARNING: Test 2, Device 02h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. WARNING: Test 2, RP 00h:1Ch.05h L0s not enabled. WARNING: Test 2, RP 00h:1Ch.05h L1 not enabled. WARNING: Test 2, Device 85h:00h.00h L0s not enabled. WARNING: Test 2, Device 85h:00h.00h L1 not enabled. PASSED: Test 2, PCIE aspm setting matched was matched. ==================================================================================================== Extract and analyse Windows Management Instrumentation (WMI). Test 1 of 2: Check Windows Management Instrumentation in DSDT Found WMI Method WMAA with GUID: 5FB7F034-2C63-45E9-BE91-3D44E2C707E4, Instance 0x01 Found WMI Event, Notifier ID: 0x80, GUID: 95F24279-4D7B-4334-9387-ACCDC67EF61C, Instance 0x01 PASSED: Test 1, GUID 95F24279-4D7B-4334-9387-ACCDC67EF61C is handled by driver hp-wmi (Vendor: HP). Found WMI Event, Notifier ID: 0xa0, GUID: 2B814318-4BE8-4707-9D84-A190A859B5D0, Instance 0x01 FAILED [MEDIUM] WMIUnknownGUID: Test 1, GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. ADVICE: A WMI driver probably needs to be written for this event. It can checked for using: wmi_has_guid("2B814318-4BE8-4707-9D84-A190A859B5D0"). One can install a notify handler using wmi_install_notify_handler("2B814318-4BE8-4707-9D84-A190A859B5D0", handler, NULL). http://lwn.net/Articles/391230 describes how to write an appropriate driver. Found WMI Object, Object ID AB, GUID: 05901221-D566-11D1-B2F0-00A0C9062910, Instance 0x01, Flags: 00 Found WMI Method WMBA with GUID: 1F4C91EB-DC5C-460B-951D-C7CB9B4B8D5E, Instance 0x01 Found WMI Object, Object ID BC, GUID: 2D114B49-2DFB-4130-B8FE-4A3C09E75133, Instance 0x7f, Flags: 00 Found WMI Object, Object ID BD, GUID: 988D08E3-68F4-4C35-AF3E-6A1B8106F83C, Instance 0x19, Flags: 00 Found WMI Object, Object ID BE, GUID: 14EA9746-CE1F-4098-A0E0-7045CB4DA745, Instance 0x01, Flags: 00 Found WMI Object, Object ID BF, GUID: 322F2028-0F84-4901-988E-015176049E2D, Instance 0x01, Flags: 00 Found WMI Object, Object ID BG, GUID: 8232DE3D-663D-4327-A8F4-E293ADB9BF05, Instance 0x01, Flags: 00 Found WMI Object, Object ID BH, GUID: 8F1F6436-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Object, Object ID BI, GUID: 8F1F6435-9F42-42C8-BADC-0E9424F20C9A, Instance 0x00, Flags: 00 Found WMI Method WMAC with GUID: 7391A661-223A-47DB-A77A-7BE84C60822D, Instance 0x01 Found WMI Object, Object ID BJ, GUID: DF4E63B6-3BBC-4858-9737-C74F82F821F3, Instance 0x05, Flags: 00 ==================================================================================================== Disassemble DSDT to check for _OSI("Linux"). ---------------------------------------------------------------------------------------------------- Test 1 of 1: Disassemble DSDT to check for _OSI("Linux"). 
This is not strictly a failure mode, it just alerts one that this has been defined in the DSDT and probably should be avoided since the Linux ACPI driver matches onto the Windows _OSI strings { If (_OSI ("Linux")) { Store (0x03E8, OSYS) } If (_OSI ("Windows 2001")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP1")) { Store (0x07D1, OSYS) } If (_OSI ("Windows 2001 SP2")) { Store (0x07D2, OSYS) } If (_OSI ("Windows 2006")) { Store (0x07D6, OSYS) } If (LAnd (MPEN, LEqual (OSYS, 0x07D1))) { TRAP (0x01, 0x48) } TRAP (0x03, 0x35) } WARNING: Test 1, DSDT implements a deprecated _OSI("Linux") test. ==================================================================================================== 0 passed, 0 failed, 1 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== ACPI DSDT Method Semantic Tests. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP Failed to install global event handler. Test 22 of 93: Check _PSR (Power Source). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 22, Detected an infinite loop when evaluating method '\_SB_.AC__._PSR'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 22, \_SB_.AC__._PSR correctly acquired and released locks 16 times. Test 35 of 93: Check _TMP (Thermal Zone Current Temp). ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.DTSZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.DTSZ._TMP correctly acquired and released locks 14 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.CPUZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.CPUZ._TMP correctly acquired and released locks 10 times. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 35, Detected an infinite loop when evaluating method '\_TZ_.SKNZ._TMP'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. PASSED: Test 35, \_TZ_.SKNZ._TMP correctly acquired and released locks 10 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000b4c (289.2 degrees K) PASSED: Test 35, \_TZ_.BATZ._TMP correctly acquired and released locks 9 times. PASSED: Test 35, _TMP correctly returned sane looking value 0x00000aac (273.2 degrees K) PASSED: Test 35, \_TZ_.FDTZ._TMP correctly acquired and released locks 7 times. 
Test 46 of 93: Check _DIS (Disable). FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. FAILED [MEDIUM] MethodShouldReturnNothing: Test 46, \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. Object returned: INTEGER: 0x00000000 ADVICE: This probably won't cause any errors, but it should be fixed as the AML code is not conforming to the expected behaviour as described in the ACPI specification. Test 61 of 93: Check _WAK (System Wake). Test _WAK(1) System Wake, State S1. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(2) System Wake, State S2. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(3) System Wake, State S3. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(4) System Wake, State S4. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test _WAK(5) System Wake, State S5. ACPICA Exception AE_AML_INFINITE_LOOP during execution of method COMP WARNING: Test 61, Detected an infinite loop when evaluating method '\_WAK'. ADVICE: This may occur because we are emulating the execution in this test environment and cannot handshake with the embedded controller or jump to the BIOS via SMIs. However, the fact that AML code spins forever means that lockup conditions are not being checked for in the AML bytecode. Test 87 of 93: Check _BCL (Query List of Brightness Control Levels Supported). Package has 2 elements: 00: INTEGER: 0x00000000 01: INTEGER: 0x00000000 FAILED [MEDIUM] Method_BCLElementCount: Test 87, Method _BCL should return a package of more than 2 integers, got just 2. Test 88 of 93: Check _BCM (Set Brightness Level). 
ACPICA Exception AE_AML_PACKAGE_LIMIT during execution of method _BCM FAILED [CRITICAL] AEAMLPackgeLimit: Test 88, Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. ==================================================================================================== ACPI table settings sanity checks. ---------------------------------------------------------------------------------------------------- Test 1 of 1: Check ACPI tables. PASSED: Test 1, Table APIC passed. Table ECDT not present to check. FAILED [MEDIUM] FADT32And64BothDefined: Test 1, FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Section 5.2.9 of the ACPI specification states that if the FIRMWARE_CONTROL is non-zero then X_FIRMWARE_CONTROL must be set to zero. ADVICE: The FADT FIRMWARE_CTRL is a 32 bit pointer that points to the physical memory address of the Firmware ACPI Control Structure (FACS). There is also an extended 64 bit version of this, the X_FIRMWARE_CTRL pointer that also can point to the FACS. Section 5.2.9 of the ACPI specification states that if the X_FIRMWARE_CTRL field contains a non zero value then the FIRMWARE_CTRL field *must* be zero. This error is also detected by the Linux kernel. If FIRMWARE_CTRL and X_FIRMWARE_CTRL are defined, then the kernel just uses the 64 bit version of the pointer. PASSED: Test 1, Table HPET passed. PASSED: Test 1, Table MCFG passed. PASSED: Test 1, Table RSDT passed. PASSED: Test 1, Table RSDP passed. Table SBST not present to check. PASSED: Test 1, Table XSDT passed. ==================================================================================================== Re-assemble DSDT and find syntax errors and warnings. ---------------------------------------------------------------------------------------------------- Test 1 of 2: Disassemble and reassemble DSDT FAILED [HIGH] AMLAssemblerError4043: Test 1, Assembler error in line 2261 Line | AML source ---------------------------------------------------------------------------------------------------- 02258| 0x00000000, // Range Minimum 02259| 0xFEDFFFFF, // Range Maximum 02260| 0x00000000, // Translation Offset 02261| 0x00000000, // Length | ^ | error 4043: Invalid combination of Length and Min/Max fixed flags 02262| ,, _Y0E, AddressRangeMemory, TypeStatic) 02263| DWordMemory (ResourceProducer, PosDecode, MinFixed, MaxFixed, Cacheable, ReadWrite, 02264| 0x00000000, // Granularity ==================================================================================================== ADVICE: (for error #4043): This occurs if the length is zero and just one of the resource MIF/MAF flags are set, or the length is non-zero and resource MIF/MAF flags are both set. These are illegal combinations and need to be fixed. See section 6.4.3.5 Address Space Resource Descriptors of version 4.0a of the ACPI specification for more details. 
FAILED [HIGH] AMLAssemblerError4050: Test 1, Assembler error in line 2268 Line | AML source ---------------------------------------------------------------------------------------------------- 02265| 0xFEE01000, // Range Minimum 02266| 0xFFFFFFFF, // Range Maximum 02267| 0x00000000, // Translation Offset 02268| 0x011FEFFF, // Length | ^ | error 4050: Length is not equal to fixed Min/Max window 02269| ,, , AddressRangeMemory, TypeStatic) 02270| }) 02271| Method (_CRS, 0, Serialized) ==================================================================================================== ADVICE: (for error #4050): The minimum address is greater than the maximum address. This is illegal. FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 8885 Line | AML source ---------------------------------------------------------------------------------------------------- 08882| Method (_DIS, 0, NotSerialized) 08883| { 08884| DSOD (0x02) 08885| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 08886| } 08887| 08888| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 1, Assembler error in line 9195 Line | AML source ---------------------------------------------------------------------------------------------------- 09192| Method (_DIS, 0, NotSerialized) 09193| { 09194| DSOD (0x01) 09195| Return (0x00) | ^ | warning level 0 1104: Reserved method should not return a value (_DIS) 09196| } 09197| 09198| Method (_SRS, 1, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1127: Test 1, Assembler error in line 9242 Line | AML source ---------------------------------------------------------------------------------------------------- 09239| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._MAX, MAX2) 09240| CreateByteField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y21._LEN, LEN2) 09241| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y22._INT, IRQ0) 09242| CreateWordField (CRES, \_SB.PCI0.LPCB.SIO.LPT0._CRS._Y23._DMA, DMA0) | ^ | warning level 0 1127: ResourceTag smaller than Field (Tag: 8 bits, Field: 16 bits) 09243| If (RLPD) 09244| { 09245| Store (0x00, Local0) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1128: Test 1, Assembler error in line 18682 Line | AML source ---------------------------------------------------------------------------------------------------- 18679| Store (0x01, Index (DerefOf (Index (Local0, 0x02)), 0x01)) 18680| If (And (WDPE, 0x40)) 18681| { 18682| Wait (\_SB.BEVT, 0x10) | ^ | warning level 0 1128: Result is not used, possible operator timeout will be missed 18683| } 18684| 18685| Store (BRID, Index (DerefOf (Index (Local0, 0x02)), 0x02)) ==================================================================================================== ADVICE: (for warning level 0 #1128): The operation can possibly timeout, and hence the return value indicates an timeout error. However, because the return value is not checked this very probably indicates that the code is buggy. A possible scenario is that a mutex times out and the code attempts to access data in a critical region when it should not. This will lead to undefined behaviour. This should be fixed. Table DSDT (0) reassembly: Found 2 errors, 4 warnings. 
Test 2 of 2: Disassemble and reassemble SSDT PASSED: Test 2, SSDT (0) reassembly, Found 0 errors, 0 warnings. FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 60 Line | AML source ---------------------------------------------------------------------------------------------------- 00057| { 00058| Store (CPDC (Arg0), Local0) 00059| GCAP (Local0) 00060| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00061| } 00062| 00063| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 174 Line | AML source ---------------------------------------------------------------------------------------------------- 00171| { 00172| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00173| GCAP (Local0) 00174| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00175| } 00176| 00177| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 244 Line | AML source ---------------------------------------------------------------------------------------------------- 00241| { 00242| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00243| GCAP (Local0) 00244| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00245| } 00246| 00247| Method (_OSC, 4, NotSerialized) ==================================================================================================== FAILED [HIGH] AMLAssemblerError1104: Test 2, Assembler error in line 290 Line | AML source ---------------------------------------------------------------------------------------------------- 00287| { 00288| Store (\_PR.CPU0.CPDC (Arg0), Local0) 00289| GCAP (Local0) 00290| Return (Local0) | ^ | warning level 0 1104: Reserved method should not return a value (_PDC) 00291| } 00292| 00293| Method (_OSC, 4, NotSerialized) ==================================================================================================== Table SSDT (1) reassembly: Found 0 errors, 4 warnings. PASSED: Test 2, SSDT (2) reassembly, Found 0 errors, 0 warnings. PASSED: Test 2, SSDT (3) reassembly, Found 0 errors, 0 warnings. ==================================================================================================== 3 passed, 10 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only. ==================================================================================================== Critical failures: 1 method test, at 1 log line: 1449: Detected error 'Package limit' when evaluating '\_SB_.PCI0.GFX0.DD02._BCM'. 
High failures: 11 klog test, at 1 log line: 121: HIGH Kernel message: [ 3.512783] ACPI Error: Method parse/execution failed [\_SB_.PCI0.GFX0._DOD] (Node f7425858), AE_AML_PACKAGE_LIMIT (20110623/psparse-536) syntaxcheck test, at 1 log line: 1668: Assembler error in line 2261 syntaxcheck test, at 1 log line: 1687: Assembler error in line 2268 syntaxcheck test, at 1 log line: 1703: Assembler error in line 8885 syntaxcheck test, at 1 log line: 1716: Assembler error in line 9195 syntaxcheck test, at 1 log line: 1729: Assembler error in line 9242 syntaxcheck test, at 1 log line: 1742: Assembler error in line 18682 syntaxcheck test, at 1 log line: 1766: Assembler error in line 60 syntaxcheck test, at 1 log line: 1779: Assembler error in line 174 syntaxcheck test, at 1 log line: 1792: Assembler error in line 244 syntaxcheck test, at 1 log line: 1805: Assembler error in line 290 Medium failures: 9 mtrr test, at 1 log line: 76: Memory range 0xc0000000 to 0xdfffffff (PCI Bus 0000:00) has incorrect attribute Write-Combining. mtrr test, at 1 log line: 78: Memory range 0xfee01000 to 0xffffffff (PCI Bus 0000:00) has incorrect attribute Write-Protect. msr test, at 1 log line: 165: MSR SYSENTER_ESP (0x175) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0xffffffffffffffff). msr test, at 1 log line: 173: MSR MISC_ENABLE (0x1a0) has 1 inconsistent values across 2 CPUs for (shift: 0 mask: 0x400c51889). wmi test, at 1 log line: 528: GUID 2B814318-4BE8-4707-9D84-A190A859B5D0 is unknown to the kernel, a driver may need to be implemented for this GUID. method test, at 1 log line: 1002: \_SB_.PCI0.LPCB.SIO_.COM1._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1011: \_SB_.PCI0.LPCB.SIO_.LPT0._DIS returned values, but was expected to return nothing. method test, at 1 log line: 1443: Method _BCL should return a package of more than 2 integers, got just 2. acpitables test, at 1 log line: 1643: FADT 32 bit FIRMWARE_CONTROL is non-zero, and X_FIRMWARE_CONTROL is also non-zero. Se

    Read the article

  • value types in the vm

    - by john.rose
Or, enduring values for a changing world.
Introduction
A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.
An Axiom of Value
In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:
A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished.
Changing the component of a value type requires construction of a new value.
The equals and hashCode operations are strictly component-wise.
If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.
A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.  
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.
Representations of Value
Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:
/*Complex x = Complex.valueOf(1.0, 2.0):*/
double x_re = 1.0, x_im = 2.0;
These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:
double distance(/*Complex x:*/ double x_re, double x_im,
        /*Complex y:*/ double y_re, double y_im) {
    /*Complex z = x.minus(y):*/
    double z_re = x_re - y_re, z_im = x_im - y_im;
    /*return z.abs():*/
    return Math.sqrt(z_re*z_re + z_im*z_im);
}
A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ‘array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ‘real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
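To see why object references are risky value carriers, here is a small sketch (not from the original note) that demonstrates two of the value-unsafe operations described above, pointer equality and synchronization, using the standard Integer wrapper. The class name ValueUnsafeDemo is made up for illustration; nothing beyond plain Java SE is assumed.
public class ValueUnsafeDemo {
    public static void main(String[] args) {
        // Two boxes holding equal component values:
        Integer a = Integer.valueOf(1000);
        Integer b = Integer.valueOf(1000);

        // Value-unsafe: pointer equality can distinguish equal values.
        // (Prints false here, since 1000 lies outside the small Integer cache.)
        System.out.println("a == b      : " + (a == b));
        System.out.println("a.equals(b) : " + a.equals(b)); // component-wise, prints true

        // Value-unsafe: synchronizing on a box ties behavior to one particular
        // object identity rather than to the value 1000 itself.
        synchronized (a) {
            System.out.println("locked on one particular box, not on the value");
        }
    }
}
The same hazards apply to any wrapper class used as a value carrier, which is exactly what the usage rules below are meant to rule out.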
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with null or with another value-safe expression. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: "null-unsafe" qq = (Complex) obj;  // warning: "null-unsafe" qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect such programs automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will always continue to manipulate boxed values, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know whether an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxed arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements (a sketch of such conversion helpers appears below). Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
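As a rough sketch of the kind of boxed/unboxed array conversion hinted at here (my own illustration; the ComplexArrays class and its methods are hypothetical, not a proposed API), the conversion between a boxed Complex[] and an interleaved double[] might look like this, using the Complex class defined earlier:

final class ComplexArrays {
    // Flatten a boxed Complex[] into an unboxed, interleaved double[] of re/im pairs.
    static double[] toUnboxed(Complex[] boxed) {
        double[] flat = new double[2 * boxed.length];
        for (int i = 0; i < boxed.length; i++) {
            flat[2 * i] = boxed[i].re;
            flat[2 * i + 1] = boxed[i].im;
        }
        return flat;
    }
    // Re-box an interleaved double[] into a Complex[] via the class's own factory method.
    static Complex[] toBoxed(double[] flat) {
        Complex[] boxed = new Complex[flat.length / 2];
        for (int i = 0; i < boxed.length; i++) {
            boxed[i] = Complex.valueOf(flat[2 * i], flat[2 * i + 1]);
        }
        return boxed;
    }
}

A JVM-level version of the same idea would perform this movement implicitly, without ever materializing the wrapper objects.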
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from the fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number, or a Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types, commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values. Similar to String, we could build an efficient multi-precision decimal type along these lines: public final class DecimalValue implements ValueType {     protected final long header;     protected final BigInteger digits;     public static DecimalValue valueOf(int value, int scale) {         assert(scale >= 0);         return new DecimalValue(((long)value << 32) + scale, null);     }     public static DecimalValue valueOf(long value, int scale) {         if (value == (int) value)             return valueOf((int)value, scale);         return new DecimalValue(-scale, BigInteger.valueOf(value));     } } Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.) Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic. Although the JVM emphasizes portability, system dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxed arrays can be packed with such values. More Daydreams A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe public interface ValueType extends java.io.Serializable,         Unsynchronizable, Interchangeable { … public class Complex implements ValueType {     // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe     … It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code. We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed representation.  For example: complex pi = complex.valueOf(Math.PI, 0); Complex boxPi = pi;  // convert to boxed myList.add(boxPi); complex z = myList.get(0);  // unbox Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types. As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure?  These considerations must wait for another day and another note. More Work It seems to me that a good sequence for introducing such value types would be as follows: Add the value-safety restrictions to an experimental version of javac. Code some sample applications with value types, including Complex and DecimalValue. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing. A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM: Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. That’s enough musing for now.  Back to work!

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility on computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.   Dispatcher Mode The dispatcher mode introduces a role which takes responsibility for finding the best service instance to process the request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transportation. For example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there’s a dispatcher listening on this URL and trying to retrieve all messages. When a message comes in, the dispatcher will find a proper service instance to process it. There are three mechanisms to find the instance: Round-robin: The dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless of whether instance 3 is busy or not at that moment. Random: The dispatcher will find a service instance randomly and, as in round-robin mode, regardless of whether the instance is busy or not. Sticky: The dispatcher will send all related messages to the same service instance. This approach is always used if the service methods are stateful or session-based. But as you can see, none of these approaches is really load-balanced. The clients will send messages at any time, and each message might take a different processing time on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 would be idle. This brings some problems to our architecture. The first one is that the response to the clients might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher's round-robin mode. Secondly, if many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means any idle service instance is wasting our money! Last, the dispatcher would be the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer which is very expensive, as well as scaling out the service instances. Pulling Mode Pulling mode doesn’t need a dispatcher to route the messages. All service instances listen to the same transport and try to retrieve the next proper message to process when they are idle.
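The pulling-mode idea itself is independent of WCF; here is a minimal, self-contained sketch (my own illustration, written in Java purely for brevity, whereas the rest of this series uses C# and WCF) of several self-managed instances pulling work from one shared FIFO queue:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PullingModeSketch {
    public static void main(String[] args) throws InterruptedException {
        // The shared queue stands in for the message bus.
        BlockingQueue<String> bus = new ArrayBlockingQueue<>(100);
        // Three service instances, each pulling the next message whenever it is idle.
        for (int i = 1; i <= 3; i++) {
            final int instance = i;
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String message = bus.take(); // block until work is available
                        System.out.println("instance " + instance + " processed " + message);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
        // Clients simply enqueue requests; busy instances poll less often and idle ones pick up the slack.
        for (int m = 0; m < 10; m++) bus.put("message-" + m);
        Thread.sleep(500);
    }
}

Note how the load balances itself: no component needs to know which instance is busy.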
    Since there is no dispatcher in pulling mode, it requires some features from the transportation. The transportation must support multiple clients connecting and multiple listeners on the server side. HTTP and TCP don’t allow multiple clients to listen on the same address and port, so they cannot be used in pulling mode directly. All messages in the transportation must be FIFO, which means the older message must be received before the newer one. Message selection would be a plus on the transportation. This means both service and client can specify some selection criteria and receive only the specified kinds of messages. This feature is not mandatory but would be very useful when implementing the request-reply and duplex WCF channel modes. Otherwise we must have an in-memory dictionary to store the reply messages. I will explain more about this in the following articles. A message bus, or message queue, would be the best candidate as the transportation when using pulling mode. First, it allows multiple applications to listen on the same queue, and it’s FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Some others provide an in-memory dictionary which can store the reply messages, for example Redis. The principle of pulling mode is to let the service instances be self-managed. This means each instance will try to retrieve the next pending incoming message once it has finished the current task. This gives us more benefits and can solve the problems we met in dispatcher mode. The incoming message will be received by the best instance to process it, which means the load will be very balanced. It will not happen that some instances are busy while others are idle, since the idle ones will retrieve more tasks to keep themselves busy. Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective. Since there’s no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something to let the dispatcher know about the new instances. But in pulling mode, since all service instances are self-managed, there is no extra change at all. If there are many incoming messages, since the message bus can queue them in the transportation, service instances will not crash. All of the above are benefits of using pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process a message. So we need more information to support debugging and tracking. Real-time response may not be supported. All service instances will process the next message only after the current one is done, so if we have real-time requirements this may not be a good solution. Comparing the pros and cons above, pulling mode would be a better solution for the distributed system architecture, because what we need more is scalability, cost-effectiveness and self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I’m going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I don’t want to go deep into every related field of WCF, but I will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally sent to the underlying transportation. On the other side, the message will be received from the transport and passed through the same channels until it reaches the business logic. This model is called the “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which is responsible for sending and receiving the messages. As we are going to implement WCF over a message bus and implement the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we go deep into the transport channel, let’s have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transportation. WCF defines 3 basic MEPs, which are Datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed. The client doesn’t care about the message result at all. Request-Reply: A very commonly used pattern. The client sends the request message to the service and waits until the reply message comes from the service. Duplex: The client sends a message to the service, and while the service is processing the message it can call back to the client. During the callback the service acts like a client while the client acts like a service. In WCF, each MEP has some associated channels:
Datagram: IInputChannel, IOutputChannel
Request-Reply: IRequestChannel, IReplyChannel
Duplex: IDuplexChannel
And the channels are created by the ChannelListener on the server side and the ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigating the WCF transport architecture, channel model and MEPs, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. But we can use the built-in CustomBinding as well. TransportBindingElement: Defines which MEPs are supported in our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport. ChannelListener: Creates the server-side channel based on the MEP it supports. We can have one ChannelListener create channels for all supported MEPs, or we can have a ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Creates the client-side channel based on the MEP it supports. We can have one ChannelFactory create channels for all supported MEPs, or we can have a ChannelFactory for each MEP. In this series I will use the second approach.
    Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension work we also need to implement some scaffolding. For example we need some classes to send and receive messages through our message bus. We also need some code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.   Message Bus There is only one thing remaining before we can begin to implement our scaling-out WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And as I said before, we can use any message bus product as long as it satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with any kind of bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain them one by one. The SendRequest method is responsible for sending the request message into the bus. The parameter descriptions are: message: The WCF message content. fromClient: Indicates if this message came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kind of channel is created, which will be explained in the following articles. to: The channel ID that should receive this message. In the Request-Reply and Duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message. The SendReply method is responsible for sending the reply message. It’s very similar to the previous one but has no “from” parameter. This is because there is no need to reply to a reply message again in any MEP. The Receive method is responsible for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which contains some information about the channel. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes. It can only be used if the service and client are in the same process. Very straightforward.
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World!

    In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series.  My goal is to do as much of an apples-to-apples comparison as I can.  This series will be written to not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code, etc., but I'd also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Here's my methodology: Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation. Implement the equivalent functionality on Windows Phone 7 Series using Silverlight. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting, or leverages an aspect of the platform, or uses a better design pattern to implement the functionality. You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. Hello World! Of course no first post would be allowed if it didn't focus on the hello world scenario.  The iPhone SDK follows that tradition with the Your First iPhone Application walkthrough.  I will say that the developer documentation for iPhone is pretty good.  There are plenty of walkthroughs and they break things down into nicely sized steps and do a good job of bringing the user along.  As expected, this application is quite simple.  It comprises a text box, a label, and a button.  When you push the button, the label changes to "Hello" plus the word you typed into the text box.  Makes perfect sense for a starter application.  There's not much to this but it covers a few basic elements: Laying out basic UI Handling user input Hooking up events Formatting text     So, let's get started building a similar app for Windows Phone 7 Series! Implementing the UI: UI in Silverlight (and therefore Windows Phone 7) is defined in XAML, which is a declarative XML language also used by WPF on the desktop.  For anyone that's familiar with similar types of markup, it's relatively straightforward to learn, but has a lot of power in it once you get it figured out.  We'll talk more about that. This UI is very simple.  When I look at this, I note a couple of things: Elements are arranged vertically They are all centered So, let's create our Application and then start with the UI.  Once you have the VS 2010 Express for Windows Phone tool running, create a new Windows Phone Project, and call it Hello World: Once created, you'll see the designer on one side and your XAML on the other: Now, we can create our UI in one of three ways: Use the designer in Visual Studio to drag and drop the components Use the designer in Expression Blend 4 to drag and drop the components Enter the XAML by hand in either of the above We'll start with (1), then kind of move to (3) just for instructional value. To develop this UI in the designer: First, delete all of the markup inside of the Grid element (LayoutRoot).
You should be left with just this XAML for your MainPage.xaml (I shortened all the xmlns declarations below for brevity): 1: <phoneNavigation:PhoneApplicationPage 2: x:Class="HelloWorld.MainPage" 3: xmlns="...[snip]" 4: FontFamily="{StaticResource PhoneFontFamilyNormal}" 5: FontSize="{StaticResource PhoneFontSizeNormal}" 6: Foreground="{StaticResource PhoneForegroundBrush}"> 7:   8: <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}"> 9:   10: </Grid> 11:   12: </phoneNavigation:PhoneApplicationPage> We'll be adding XAML at line 9, so that's the important part. Now: click on the center area of the phone surface; open the Toolbox and double click StackPanel; double click TextBox; double click TextBlock; double click Button. That will create the necessary UI elements, but they won't be arranged quite right.  We'll fix it in a second.
Here's the XAML that we end up with: 1: <StackPanel Height="100" HorizontalAlignment="Left" Margin="10,10,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200"> 2: <TextBox Height="32" Name="textBox1" Text="TextBox" Width="100" /> 3: <TextBlock Height="23" Name="textBlock1" Text="TextBlock" /> 4: <Button Content="Button" Height="70" Name="button1" Width="160" /> 5: </StackPanel> The designer does its best at guessing what we want, but in this case we want things to be a bit simpler. So we'll just clean it up a bit.  We want the items to be centered and we want them to have a little bit of a margin on either side, so here's what we end up with.  I've also made it match the values and style from the iPhone app: 1: <StackPanel Margin="10"> 2: <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/> 3: <TextBlock Name="textBlock1" HorizontalAlignment="Center" Margin="0,100,0,0" Text="Hello You!"
/> 4: <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello"/> 5: </StackPanel> Now let's take a look at what we've done there. Line 1: We removed all of the formatting from the StackPanel, except for Margin, as that's all we need.  Since our parent element is a Grid, by default the StackPanel will be sized to fit in that space.  The Margin says that we want to reserve 10 pixels on each side of the StackPanel. Line 2: We've set the HorizontalAlignment of the TextBox to Stretch, which says that it should fill its parent's size horizontally.  We want to do this so the TextBox is always full-width.  We also set TextAlignment to Center, to center the text. Line 3: In contrast to the TextBox above, we don't care how wide the TextBlock is, just so long as it is big enough for its text.  That'll happen automatically, so we just set its HorizontalAlignment to Center.  We also set a Margin above the TextBlock of 100 pixels to bump it down a bit, per the iPhone UI. Line 4: We do the same things here as in Line 3. Here's how the UI looks in the designer: Believe it or not, we're almost done! Implementing the App Logic Now, we want the TextBlock to change its text when the Button is clicked.  In the designer, double click the Button to be taken to the event handler for the Button's Click event.  In that event handler, we take the Text property from the TextBox, format it into a string, then set it into the TextBlock.  That's it!
1: private void button1_Click(object sender, RoutedEventArgs e) 2: { 3: string name = textBox1.Text; 4:   5: // if there isn't a name set, just use "World" 6: if (String.IsNullOrEmpty(name)) 7: { 8: name = "World"; 9: } 10:   11: // set the value into the TextBlock 12: textBlock1.Text = String.Format("Hello {0}!", name); 13:   14: } We use the String.Format() method to handle the formatting for us.  Now all that's left is to test the app in the Windows Phone Emulator and verify it does what we think it does! And it does! Comparing against the iPhone Looking at the iPhone example, there are basically three things that you have to touch as the developer: 1) The UI in the Nib file 2) The app delegate 3) The view controller Counting lines is a bit tricky here, but to try to keep this even, I'm going to only count lines of code that I could not have (or would not have) generated with the tooling.  Meaning, I'm not counting XAML and I'm not counting operations that happen in the Nib file with the XCode designer tool.  So in the case of the above, even though I modified the XAML, I could have done all of those operations using the visual designer tool.  And normally I would have, but the XAML is more instructive (and fewer steps!).  I'm interested in things that I, as the developer, have to figure out in code.  I'm also not counting lines that just have a curly brace on them, or lines that are generated for me (e.g. method names that are generated for me when I make a connection, etc.). So, by that count, here's what I get from the code listing for the iPhone app found here: HelloWorldAppDelegate.h: 6 HelloWorldAppDelegate.m: 12 MyViewController.h: 8 MyViewController.m: 18 Which gives me a grand total of about 44 lines of code on iPhone.  I really do recommend looking at the iPhone code for a comparison to the above. Now, for the Windows Phone 7 Series application, the only code I typed was in the event handler above: Main.Xaml.cs: 4 So a total of 4 lines of code on Windows Phone 7.  And more importantly, the process is just A LOT simpler.  For example, I was surprised that the User Interface Designer in XCode doesn't automatically create instance variables for me and wire them up to the corresponding elements.  I assumed I wouldn't have to write this code myself (and risk getting it wrong!).  I don't need to worry about view controllers or anything.  I just write my code.  This blog post up to this point has covered almost every aspect of this app's development in a few pages.  The iPhone tutorial has 5 top level steps with 2-3 sub sections of each. Now, it's worth pointing out that the iPhone development model uses the Model View Controller (MVC) pattern, which is a very flexible and powerful pattern that enforces proper separation of concerns.  But it's fairly complex and difficult to understand when you first walk up to it.
Here at Microsoft we've dabbled in MVC a bit, with frameworks like MFC on Visual C++ and with the ASP.NET MVC framework now. Both are very powerful frameworks. But one of the reasons we've stayed away from MVC with client UI frameworks is that it's difficult to tool. We haven't seen the type of value that beats "double click, write code!" for the broad set of scenarios.
Another thing to think about is: how many of those lines of code were focused on my app's functionality? Or, the converse: how many lines of code were boilerplate plumbing? In both examples, the actual number of functional code lines is similar. I count most of them in MyViewController.m, in the changeGreeting method. It's about 7 lines of code that do the work of taking the value from the TextBox and putting it into the label. Versus 4 on the Windows Phone 7 side. But, unfortunately, on iPhone I still have to write that other 37 lines of code, just to get there. 10% of the code, 1 file instead of 4; it's just much simpler.
Making Some Tweaks
It turns out, I can actually do this application with ZERO lines of code, if I'm willing to change the spec a bit. The data binding functionality in Silverlight is incredibly powerful. And what I can do is databind the TextBox's value directly to the TextBlock. Take some time looking at the XAML below. You'll see that I have added another nested StackPanel and two more TextBlocks. Why? Because that's how I build that string, and the nested StackPanel will lay things out horizontally for me, as specified by the Orientation property.
1: <StackPanel Margin="10"> 2: <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/> 3: <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0" > 4: <TextBlock Text="Hello " /> 5: <TextBlock Name="textBlock1" Text="{Binding ElementName=textBox1, Path=Text}" /> 6: <TextBlock Text="!" /> 7: </StackPanel> 8: <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello" Click="button1_Click" /> 9: </StackPanel>
Now, the real action is there in the bolded TextBlock.Text property: Text="{Binding ElementName=textBox1, Path=Text}" That does all the heavy lifting.
It sets up a databinding between the TextBox.Text property on textBox1 and the TextBlock.Text property on textBlock1. As I change the text of the TextBox, the label updates automatically. In fact, I don't even need the button any more, so I could get rid of that altogether. And no button means no event handler. No event handler means no C# code at all.

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.) Overview of Common .NET Memory Problems When coming to land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection then you’ll get that memory back finally, but that could take a very long time depending of the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once such that calling dispose multiple times should not be a problem. 
Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead). Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly, thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues).
ANTS Memory Profiler 7.0
When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in, however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a “Getting Started” screen that contains links to “Memory profiling video tutorials”, “Strategies for memory profiling”, and the “ANTS Memory Profiler forum”. I’m normally the kind of person who looks at a screen like that only to find the “Don’t show this again” checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it “ANTS MP” from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the “Take Memory Snapshot” button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
This is amazing in some ways as it shows you how in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure what the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program). The Basic Filters are not the only filters there are. “Filter by Object Type” gives you the ability to filter by: “Objects that are disposable”; “Objects that are/are not disposed”; “Objects that are/are not GC roots” (GC roots are things like static variables); and “Objects that implement _______”. “Objects that implement” is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is “Filter by Reference”, which gives you the option to pare down the list based on whether an object is “Kept in memory exclusively by” a particular item, a class/interface, or a namespace; whether an object is “Referenced by” one or more of those choices; and whether an object is “Never referenced by” one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections still remains in effect unless and until you go back and change it. There’s quite a bit more to ANTS MP – it’s a very full featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio’s built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that’s already running. 
The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer it will take to collect statistics. So just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be a very helpful thing for your memory profiling as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu that it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand your memory usage in an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it’s very easy to integrate and make use of and is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code" article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap it in a try-catch block since, under certain circumstances, it can raise an exception, such as trying to call it more than 5 times in 30 seconds. The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to. 
Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally or to have the option to download an additional “documentation” package that would add local copies. Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy as is understanding what those filter do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session and forgetting that I had set some filter a long while earlier in a different filter section and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g. the instance list before remembering to check all the filters for that one setting that was only culling a few things from view. Some sort of indicator icon next to the filter section names that appears you have at least one filter set in that area would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!” Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guess work and WeakReference creation followed by forced collections in order to find the source of a memory problem.) The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return back to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release. Bottom Line ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis. 
More importantly to me, knowing what is happening memory-wise in my programs and having the confidence that my code doesn’t have any hidden time bombs in it that will cause it to OOM if I leave it running for longer than I do when I spin it up real quickly for debugging or just to see how a new feature looks and feels is a good feeling. It’s a feeling that I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14 day free trial, so if you aren’t sure if it’s right for you or if you think it seems interesting but aren’t really sure if it’s worth shelling out the money for it, give it a try.

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:
1. Create network - what happens when we create a network and how we can create multiple isolated networks
2. Launch a VM - once we have networks we can launch VMs and connect them to networks.
3. DHCP request from a VM - OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.
In this post we will show connectivity: we will see how packets get from point A to point B. We first focus on what a configured deployment looks like, and only later we will discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, in a later post, we will take a step back and explain how Neutron configures the components to be able to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly and how to look at them will be very helpful when trying to debug a deployment in case something does not work.
Use case #1: Create Network
Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack the network is only available to the tenant who created it, or it can be defined as “shared” and then it can be used by all tenants. A network can have multiple subnets, but for the purpose of this demonstration and for simplicity we will assume that each network has exactly one subnet.
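(A quick aside that is not part of the original post: everything done below with the neutron CLI can also be scripted against the same API. Here is a minimal sketch using the Neutron Python client; the credentials, auth URL and names are placeholders you would replace with values from your own deployment, and the exact client constructor arguments can vary between OpenStack releases.)

# Hypothetical credentials/endpoint -- replace with values from your deployment.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Equivalent of "neutron net-create net1"
net = neutron.create_network({'network': {'name': 'net1'}})
net_id = net['network']['id']

# Equivalent of "neutron subnet-create net1 10.10.10.0/24"
subnet = neutron.create_subnet({'subnet': {'network_id': net_id,
                                           'cidr': '10.10.10.0/24',
                                           'ip_version': 4}})

print('network:', net_id, 'subnet:', subnet['subnet']['id'])

The CLI commands that follow do exactly the same thing, so the returned IDs can be cross-checked against the command-line output.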
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
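(Another small aside, not from the original post: the two commands just shown, "ip netns list" and "ip netns exec ... ip addr", can be wrapped in a short script when a deployment has many networks and you want to dump every qdhcp-* namespace at once. A rough sketch, assuming it runs as root on the control node:)

import subprocess

def netns_names():
    # Wraps "ip netns list"; the namespace name is the first token on each line.
    out = subprocess.check_output(['ip', 'netns', 'list'], text=True)
    return [line.split()[0] for line in out.splitlines() if line.strip()]

for ns in netns_names():
    if not ns.startswith('qdhcp-'):
        continue
    # Same as running "ip netns exec <namespace> ip addr" by hand, as above.
    addrs = subprocess.check_output(['ip', 'netns', 'exec', ns, 'ip', 'addr'], text=True)
    print(ns)
    print(addrs)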
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
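(Side note, not part of the original walkthrough: when tracing several VMs it can be handy to pull the interface and bridge names straight out of each libvirt.xml in that directory, rather than opening the files by hand. A rough sketch, assuming the default Nova instance path mentioned above:)

import glob
import xml.etree.ElementTree as ET

# Assumes the default Nova instance path; adjust for your deployment.
for xml_path in glob.glob('/var/lib/nova/instances/*/libvirt.xml'):
    root = ET.parse(xml_path).getroot()
    for iface in root.iter('interface'):
        # Each <interface> element holds the MAC, the Linux bridge and the tap device name.
        mac = iface.find('mac').get('address')
        bridge = iface.find('source').get('bridge')
        tap = iface.find('target').get('dev')
        print(xml_path, mac, bridge, tap)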
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called “tap53903a95-82” which is connected to a Linux bridge called “qbr53903a95-82”: <interface type="bridge">   <mac address="fa:16:3e:fe:c7:87"/>   <source bridge="qbr53903a95-82"/>   <target dev="tap53903a95-82"/> </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82    tap53903a95-82    The bridge has two interfaces, one connected to the VM (“tap53903a95-82”) and another one (“qvb53903a95-82”) connected to the “br-int” bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier, “br-int” is connected to “br-eth2” on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM -> tap53903a95-82 (virtual interface) -> qbr53903a95-82 (Linux bridge) -> qvb53903a95-82 (interface connecting the Linux bridge to the OVS bridge br-int) -> int-br-eth2 (one end of the veth pair) -> phy-br-eth2 (the other end of the veth pair) -> eth2 (physical interface). The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges, we use a Linux bridge to apply them. In the future we hope to see this Linux bridge go away once such rules can be applied directly on the OVS bridge. VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here, and on the packet's way from the VM to the network and back the VLAN tag is modified. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called qdhcp-<some id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both will tag their packets with VLAN tag 1000. We saw that the namespace has an interface with IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet they can ping each other; in this picture we see a ping from the VM, which was assigned 10.10.10.2, to the namespace: The fact that they are connected and can ping each other can become very handy when something doesn't work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools.
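(To make that kind of connectivity check repeatable, here is a small sketch of my own, not from the post, that runs the same ping from inside the DHCP namespace; the namespace name and VM address are the example values used in this article and would need to be replaced, and it has to run as root on the control node.)

import subprocess

# Example values from this post -- substitute your own network ID and VM address.
namespace = 'qdhcp-5f833617-6179-4797-b7c0-7d420d84040c'
vm_ip = '10.10.10.2'

# Equivalent of: ip netns exec <namespace> ping -c 3 -W 2 <vm_ip>
result = subprocess.run(['ip', 'netns', 'exec', namespace,
                         'ping', '-c', '3', '-W', '2', vm_ip],
                        capture_output=True, text=True)

if result.returncode == 0:
    print('VM is reachable from the DHCP namespace')
else:
    print('No connectivity; check VLAN tags, OVS flows and the Linux bridge')
    print(result.stdout)
    print(result.stderr)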
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload. # vmstat 1  kthr      memory            page            disk          faults      cpu  r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id  1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9  12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9  11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7  8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27  0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97  0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95  0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92  14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8  9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11 This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including: Unexpected JMeter behavior Network contention Java Garbage Collection Application Server thread pool problems Connection pool problems Database transaction processing Database I/O contention Graphing the CPU %idle led to a breakthrough: Several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:                     extended device statistics     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0     0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0     0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0     0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0     0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0     0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0     0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0 Here is a little more information about my database configuration: The database and application server were running on two different SPARC servers. Storage for the database was on a storage array connected via 8 gigabit Fibre Channel Data storage and log file were on different physical disk drives Reliable low latency I/O is provided by battery backed NVRAM Highly available: Two Fibre Channel links accessed via MPxIO Two Mirrored cache controllers The log file physical disks were mirrored in the storage device Database log files on a ZFS Filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming Why would I be getting service time spikes in my high-end storage? 
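(An aside that is not from the original post: the outlier hunt described above, scanning the stats for spikes spaced about 30 seconds apart, is easy to automate. A rough sketch that flags high asvc_t values in iostat output, assuming the column layout shown above:)

import sys

# Flag iostat lines whose asvc_t (8th column) exceeds a threshold in milliseconds.
# Column order assumed: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
THRESHOLD_MS = 5.0

for lineno, line in enumerate(sys.stdin, 1):
    fields = line.split()
    if len(fields) != 11 or fields[-1] == 'device':
        continue  # skip headers, blank lines and section banners
    try:
        asvc_t = float(fields[7])
    except ValueError:
        continue
    if asvc_t > THRESHOLD_MS:
        print('line %d: %s asvc_t=%.1f ms' % (lineno, fields[-1], asvc_t))

Feeding it a saved iostat capture makes the roughly 30-second spacing of the spikes stand out.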
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did: At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device: # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0 The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs: Here is the zpool configuration: # zpool status ZFS-db-41   pool: ZFS-db-41  state: ONLINE  scan: none requested config:         NAME                                     STATE         ZFS-db-41                                ONLINE           c3t60080E5...F4F6d0  ONLINE         logs           c3t60080E5...F8A7d0  ONLINE Now, the I/O spikes look like this:                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0     0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0     0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0 We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log Solaris ZFS, Synchronous Writes and the ZIL Explained ZFS Evil Tuning Guide: Cache Flushes ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Retrieving Json Array

    - by Rahul Varma
    Hi, I am trying to retrieve the values from the following url: http://rentopoly.com/ajax.php?query=Bo. I want to get the values of all the suggestions to be displayed in a list view one by one. This is how i want to do... public class AlertsAdd { public ArrayList<JSONObject> retrieveJSONArray(String urlString) { String result = queryRESTurl(urlString); ArrayList<JSONObject> ALERTS = new ArrayList<JSONObject>(); if (result != null) { try { JSONObject json = new JSONObject(result); JSONArray alertsArray = json.getJSONArray("suggestions"); for (int a = 0; a < alertsArray.length(); a++) { JSONObject alertitem = alertsArray.getJSONObject(a); ALERTS.add(alertitem); } return ALERTS; } catch (JSONException e) { Log.e("JSON", "There was an error parsing the JSON", e); } } JSONObject myObject = new JSONObject(); try { myObject.put("suggestions",myObject.getJSONArray("suggestions")); ALERTS.add(myObject); } catch (JSONException e1) { Log.e("JSON", "There was an error creating the JSONObject", e1); } return ALERTS; } private String queryRESTurl(String url) { // URLConnection connection; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpResponse response; try { response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); if (entity != null) { InputStream instream = entity.getContent(); String result = convertStreamToString(instream); instream.close(); return result; } } catch (ClientProtocolException e) { Log.e("REST", "There was a protocol based error", e); } catch (IOException e) { Log.e("REST", "There was an IO Stream related error", e); } return null; } /** * To convert the InputStream to String we use the * BufferedReader.readLine() method. We iterate until the BufferedReader * return null which means there's no more data to read. Each line will * appended to a StringBuilder and returned as String. */ private String convertStreamToString(InputStream is) { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); StringBuilder sb = new StringBuilder(); String line = null; try { while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } } catch (IOException e) { e.printStackTrace(); } finally { try { is.close(); } catch (IOException e) { e.printStackTrace(); } } return sb.toString(); } } Here's the adapter code... public class AlertsAdapter extends ArrayAdapter<JSONObject> { public AlertsAdapter(Activity activity, List<JSONObject> alerts) { super(activity, 0, alerts); } @Override public View getView(int position, View convertView, ViewGroup parent) { Activity activity = (Activity) getContext(); LayoutInflater inflater = activity.getLayoutInflater(); View rowView = inflater.inflate(R.layout.list_text, null); JSONObject imageAndText = getItem(position); TextView textView = (TextView) rowView.findViewById(R.id.last_build_stat); try { textView.setText((String)imageAndText.get("suggestions")); } catch (JSONException e) { textView.setText("JSON Exception"); } return rowView; } } Here's the logcat... 04-30 13:09:46.656: INFO/ActivityManager(584): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.WorldToyota/.Alerts } 04-30 13:09:50.417: ERROR/JSON(924): There was an error parsing the JSON 04-30 13:09:50.417: ERROR/JSON(924): org.json.JSONException: JSONArray[0] is not a JSONObject. 
04-30 13:09:50.417: ERROR/JSON(924): at org.json.JSONArray.getJSONObject(JSONArray.java:268) 04-30 13:09:50.417: ERROR/JSON(924): at com.WorldToyota.AlertsAdd.retrieveJSONArray(AlertsAdd.java:30) 04-30 13:09:50.417: ERROR/JSON(924): at com.WorldToyota.Alerts.onCreate(Alerts.java:20) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 04-30 13:09:50.417: ERROR/JSON(924): at android.os.Handler.dispatchMessage(Handler.java:99) 04-30 13:09:50.417: ERROR/JSON(924): at android.os.Looper.loop(Looper.java:123) 04-30 13:09:50.417: ERROR/JSON(924): at android.app.ActivityThread.main(ActivityThread.java:4203) 04-30 13:09:50.417: ERROR/JSON(924): at java.lang.reflect.Method.invokeNative(Native Method) 04-30 13:09:50.417: ERROR/JSON(924): at java.lang.reflect.Method.invoke(Method.java:521) 04-30 13:09:50.417: ERROR/JSON(924): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 04-30 13:09:50.417: ERROR/JSON(924): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 04-30 13:09:50.417: ERROR/JSON(924): at dalvik.system.NativeStart.main(Native Method) 04-30 13:09:50.688: ERROR/JSON(924): There was an error creating the JSONObject 04-30 13:09:50.688: ERROR/JSON(924): org.json.JSONException: JSONObject["suggestions"] not found. 04-30 13:09:50.688: ERROR/JSON(924): at org.json.JSONObject.get(JSONObject.java:287) 04-30 13:09:50.688: ERROR/JSON(924): at org.json.JSONObject.getJSONArray(JSONObject.java:362) 04-30 13:09:50.688: ERROR/JSON(924): at com.WorldToyota.AlertsAdd.retrieveJSONArray(AlertsAdd.java:41) 04-30 13:09:50.688: ERROR/JSON(924): at com.WorldToyota.Alerts.onCreate(Alerts.java:20) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 04-30 13:09:50.688: ERROR/JSON(924): at android.os.Handler.dispatchMessage(Handler.java:99) 04-30 13:09:50.688: ERROR/JSON(924): at android.os.Looper.loop(Looper.java:123) 04-30 13:09:50.688: ERROR/JSON(924): at android.app.ActivityThread.main(ActivityThread.java:4203) 04-30 13:09:50.688: ERROR/JSON(924): at java.lang.reflect.Method.invokeNative(Native Method) 04-30 13:09:50.688: ERROR/JSON(924): at java.lang.reflect.Method.invoke(Method.java:521) 04-30 13:09:50.688: ERROR/JSON(924): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 04-30 13:09:50.688: ERROR/JSON(924): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 04-30 13:09:50.688: ERROR/JSON(924): at dalvik.system.NativeStart.main(Native Method) Plz help me parsing this script and displaying the values in list format....
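(A note on the error above, added for clarity and not part of the original question: "JSONArray[0] is not a JSONObject" strongly suggests that the "suggestions" array holds plain strings rather than objects, in which case the Java code should read them with alertsArray.getString(a) instead of getJSONObject(a). A quick, hypothetical way to confirm the payload shape, sketched here in Python for brevity:)

import json
import urllib.request

# URL taken from the question above.
url = 'http://rentopoly.com/ajax.php?query=Bo'
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

suggestions = data.get('suggestions', [])
print(type(suggestions[0]))   # <class 'str'> would mean the elements are plain strings
for s in suggestions:
    print(s)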

    Read the article

  • Internal Server Error with mod_wsgi [django] on Windows XP

    - by sacabuche
    when i run development server it works very well, even an empty project runing in mod_wsgi i have no problem but when i want to put my own project i get an Internal Server Error (500) in my apache conf i put WSGIScriptAlias /codevents C:/django/apache/CODEvents.wsgi <Directory "C:/django/apache"> Order allow,deny Allow from all </Directory> Alias /codevents/media C:/django/projects/CODEvents/html <Directory "C:/django/projects/CODEvents/html"> Order allow,deny Allow from all </Directory> in CODEvents.wsgi import os, sys sys.path.append('C:\\Python26\\Lib\\site-packages\\django') sys.path.append('C:\\django\\projects') sys.path.append('C:\\django\\apps') sys.path.append('C:\\django\\projects\\CODEvents') os.environ['DJANGO_SETTINGS_MODULE'] = 'CODEvents.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() and in my error_log it said: [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] mod_wsgi (pid=1848): Exception occurred processing WSGI script 'C:/django/apache/CODEvents.wsgi'. [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Traceback (most recent call last): [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\wsgi.py", line 241, in __call__ [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] response = self.get_response(request) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 142, in get_response [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.handle_uncaught_exception(request, resolver, exc_info) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 166, in handle_uncaught_exception [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return debug.technical_500_response(request, *exc_info) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 58, in technical_500_response [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] html = reporter.get_traceback_html() [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 137, in get_traceback_html [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return t.render(c) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 173, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self._render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 167, in _render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.nodelist.render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 796, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] bits.append(self.render_node(node, context)) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 72, in render_node [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] result = node.render(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 89, in render [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] output = 
self.filter_expression.resolve(context) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 579, in resolve [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] new_obj = func(obj, *arg_vals) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\template\\defaultfilters.py", line 693, in date [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return format(value, arg) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 281, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return df.format(format_string) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] pieces.append(force_unicode(getattr(self, piece)())) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 187, in r [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.format('D, j M Y H:i:s O') [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] pieces.append(force_unicode(getattr(self, piece)())) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\encoding.py", line 66, in force_unicode [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] s = unicode(s) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 206, in __unicode_cast [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return self.__func(*self.__args, **self.__kw) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 55, in ugettext [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return real_ugettext(message) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 55, in _curried [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs)) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 36, in delayed_loader [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return getattr(trans, real_name)(*args, **kwargs) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 276, in ugettext [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] return do_translate(message, 'ugettext') [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 266, in do_translate [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] _default = translation(settings.LANGUAGE_CODE) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 176, in translation [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] default_translation = _fetch(settings.LANGUAGE_CODE) [Mon May 24 23:31:39 2010] [error] [client 
127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 159, in _fetch [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] app = import_module(appname) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\utils\\importlib.py", line 35, in import_module [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] __import__(name) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\__init__.py", line 1, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\helpers.py", line 1, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django import forms [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\forms\\__init__.py", line 17, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from models import * [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\forms\\models.py", line 6, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] from django.db import connections [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\__init__.py", line 75, in <module> [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] connection = connections[DEFAULT_DB_ALIAS] [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 91, in __getitem__ [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] backend = load_backend(db['ENGINE']) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 49, in load_backend [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] raise ImproperlyConfigured(error_msg) [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Try using django.db.backends.XXX, where XXX is one of: [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' [Mon May 24 23:31:39 2010] [error] [client 127.0.0.1] Error was: cannot import name utils please help me!!

    Read the article

  • Android WebView not loading a JavaScript file, but Android Browser loads it fine

    - by Justin
    I'm writing an application which connects to a back office site. The backoffice site contains a whole slew of JavaScript functions, at least 100 times the average site. Unfortunately it does not load them, and causes much of the functionality to not work properly. So I am running a test. I put a page out on my server which loads the FireBugLite javascript text. Its a lot of javascript and perfect to test and see if the Android WebView will load it. The WebView loads nothing, but the browser loads the Firebug Icon. What on earth would make the difference, why can it run in the browser and not in my WebView? Any suggestions. More background information, in order to get the stinking backoffice application available on a Droid (or any other platform except windows) I needed to trick the bakcoffice application to believe what's accessing the website is Internet Explorer. I do this by modifying the WebView User Agent. Also for this application I've slimmed my landing page, so I could give you the source to offer me aid. package ksc.myKMB; import android.app.Activity; import android.app.AlertDialog; import android.app.Dialog; import android.app.ProgressDialog; import android.content.DialogInterface; import android.graphics.Bitmap; import android.os.Bundle; import android.view.Menu; import android.view.MenuInflater; import android.view.MenuItem; import android.view.Window; import android.webkit.WebChromeClient; import android.webkit.WebView; import android.webkit.WebSettings; import android.webkit.WebViewClient; import android.widget.Toast; public class myKMB extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); /** Performs base set up */ /** Create a Activity of this Activity, IE myProcess */ myProcess = this; /*** Create global objects and web browsing objects */ HideDialogOnce = true; webview = new WebView(this) { }; webChromeClient = new WebChromeClient() { public void onProgressChanged(WebView view, int progress) { // Activities and WebViews measure progress with different scales. 
// The progress meter will automatically disappear when we reach 100% myProcess.setProgress((progress * 100)); //CreateMessage("Progress is : " + progress); } }; webViewClient = new WebViewClient() { public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) { Toast.makeText(myProcess, MessageBegText + description + MessageEndText, Toast.LENGTH_SHORT).show(); } public void onPageFinished (WebView view, String url) { /** Hide dialog */ try { // loadingDialog.dismiss(); } finally { } //myProcess.setProgress(1000); /** Fon't show the dialog while I'm performing fixes */ //HideDialogOnce = true; view.loadUrl("javascript:document.getElementById('JTRANS011').style.visibility='visible';"); } public void onPageStarted(WebView view, String url, Bitmap favicon) { if (HideDialogOnce == false) { //loadingDialog = ProgressDialog.show(myProcess, "", // "One moment, the page is laoding...", true); } else { //HideDialogOnce = true; } } }; getWindow().requestFeature(Window.FEATURE_PROGRESS); webview.setWebChromeClient(webChromeClient); webview.setWebViewClient(webViewClient); setContentView(webview); /** Load the Keynote Browser Settings */ LoadSettings(); webview.loadUrl(LandingPage); } /** Get Menu */ @Override public boolean onCreateOptionsMenu(Menu menu) { MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.menu, menu); return true; } /** an item gets pushed */ @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { // We have only one menu option case R.id.quit: System.exit(0); break; case R.id.back: webview.goBack(); case R.id.refresh: webview.reload(); case R.id.info: //IncludeJavascript(""); } return true; } /** Begin Globals */ public WebView webview; public WebChromeClient webChromeClient; public WebViewClient webViewClient; public ProgressDialog loadingDialog; public Boolean HideDialogOnce; public Activity myProcess; public String OverideUserAgent_IE = "Mozilla/5.0 (Windows; MSIE 6.0; Android 1.6; en-US) AppleWebKit/525.10+ (KHTML, like Gecko) Version/3.0.4 Safari/523.12.2 myKMB/1.0"; public String LandingPage = "http://kscserver.com/main-leap-slim.html"; public String MessageBegText = "Problem making a connection, Details: "; public String MessageEndText = " For Support Call: (xxx) xxx - xxxx."; public void LoadSettings() { webview.getSettings().setUserAgentString(OverideUserAgent_IE); webview.getSettings().setJavaScriptEnabled(true); webview.getSettings().setBuiltInZoomControls(true); webview.getSettings().setSupportZoom(true); } /** Creates a message alert dialog */ public void CreateMessage(String message) { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setMessage(message) .setCancelable(true) .setNegativeButton("Close", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { dialog.cancel(); } }); AlertDialog alert = builder.create(); alert.show(); } } My Application is running in the background, and as you can see no Firebug in the lower right hand corner. However the browser (the emulator on top) has the same page but shows the firebug. What am I doing wrong? I'm assuming its either not enough memory allocated to the application, process power allocation, or a physical memory thing. I can't tell, all I know is the results are strange. I get the same thing form my android device, the application shows no firebug but the browser shows the firebug.
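    One way to narrow this down is to let the page's JavaScript console show up in logcat and to switch on the HTML5 storage settings the stock Browser enables by default, since a single script error or a missing feature can silently stop a large script from running in a WebView. The sketch below is a hedged drop-in for the existing webChromeClient and LoadSettings(); onConsoleMessage(ConsoleMessage) and setDomStorageEnabled need API level 8/7 or later, so they only report on 2.x images and the storage call must be removed or guarded if the app still has to run on 1.6. It also needs import android.util.Log and android.webkit.ConsoleMessage.

    // Inside the existing activity, replacing the current webChromeClient:
    webChromeClient = new WebChromeClient() {
        @Override
        public void onProgressChanged(WebView view, int progress) {
            myProcess.setProgress(progress * 100);
        }

        @Override
        public boolean onConsoleMessage(ConsoleMessage msg) {   // API 8+; simply never called on older images
            Log.d("myKMB-js", msg.message() + " @ " + msg.sourceId() + ":" + msg.lineNumber());
            return true;
        }
    };

    // Revised LoadSettings():
    public void LoadSettings() {
        WebSettings s = webview.getSettings();
        s.setUserAgentString(OverideUserAgent_IE);
        s.setJavaScriptEnabled(true);
        s.setJavaScriptCanOpenWindowsAutomatically(true);   // some script loaders open helper windows
        s.setDomStorageEnabled(true);                        // API 7+; drop this line for a 1.6-only build
        s.setBuiltInZoomControls(true);
        s.setSupportZoom(true);
    }

    Whatever the page logs through onConsoleMessage (a syntax error, a blocked request, a missing feature) is usually a much better lead than the user-agent or memory theories.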

    Read the article

  • C# .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
    I'm developing an Asynchronous Game Server using .Net Socket Asynchronous Model( BeginAccept/EndAccept...etc.) The problem I'm facing is described like that: When I have only one client connected, the server response time is very fast but once a second client connects, the server response time increases too much. I've measured the time from a client sends a message to the server until it gets the reply in both cases. I found that the average time in case of one client is about 17ms and in case of 2 clients about 280ms!!! What I really see is that: When 2 clients are connected and only one of them is moving(i.e. requesting service from the server) it is equivalently equal to the case when only one client is connected(i.e. fast response). However, when the 2 clients move at the same time(i.e. requests service from the server at the same time) their motion becomes very slow (as if the server replies each one of them in order i.e. not simultaneously). Basically, what I am doing is that: When a client requests a permission for motion from the server and the server grants him the request, the server then broadcasts the new position of the client to all the players. So if two clients are moving in the same time, the server is eventually trying to broadcast to both clients the new position of each of them at the same time. EX: Client1 asks to go to position (2,2) Client2 asks to go to position (5,5) Server sends to each of Client1 & Client2 the same two messages: message1: "Client1 at (2,2)" message2: "Client2 at (5,5)" I believe that the problem comes from the fact that Socket class is thread safe according MSDN documentation http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx. (NOT SURE THAT IT IS THE PROBLEM) Below is the code for the server: /// /// This class is responsible for handling packet receiving and sending /// public class NetworkManager { /// /// An integer to hold the server port number to be used for the connections. Its default value is 5000. /// private readonly int port = 5000; /// /// hashtable contain all the clients connected to the server. /// key: player Id /// value: socket /// private readonly Hashtable connectedClients = new Hashtable(); /// /// An event to hold the thread to wait for a new client /// private readonly ManualResetEvent resetEvent = new ManualResetEvent(false); /// /// keeps track of the number of the connected clients /// private int clientCount; /// /// The socket of the server at which the clients connect /// private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); /// /// The socket exception that informs that a client is disconnected /// private const int ClientDisconnectedErrorCode = 10054; /// /// The only instance of this class. /// private static readonly NetworkManager networkManagerInstance = new NetworkManager(); /// /// A delegate for the new client connected event. /// /// the sender object /// the event args public delegate void NewClientConnected(Object sender, SystemEventArgs e); /// /// A delegate for the position update message reception. 
/// /// the sender object /// the event args public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e); /// /// The event which fires when a client sends a position message /// public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; } /// /// keeps track of the number of the connected clients /// public int ClientCount { get { return clientCount; } } /// /// A getter for this class instance. /// /// only instance. public static NetworkManager NetworkManagerInstance { get { return networkManagerInstance; } } private NetworkManager() {} /// Starts the game server and holds this thread alive /// public void StartServer() { //Bind the mainSocket to the server IP address and port mainSocket.Bind(new IPEndPoint(IPAddress.Any, port)); //The server starts to listen on the binded socket with max connection queue //1024 mainSocket.Listen(1024); //Start accepting clients asynchronously mainSocket.BeginAccept(OnClientConnected, null); //Wait until there is a client wants to connect resetEvent.WaitOne(); } /// /// Receives connections of new clients and fire the NewClientConnected event /// private void OnClientConnected(IAsyncResult asyncResult) { Interlocked.Increment(ref clientCount); ClientInfo newClient = new ClientInfo { WorkerSocket = mainSocket.EndAccept(asyncResult), PlayerId = clientCount }; //Add the new client to the hashtable and increment the number of clients connectedClients.Add(newClient.PlayerId, newClient); //fire the new client event informing that a new client is connected to the server if (NewClientEvent != null) { NewClientEvent(this, System.EventArgs.Empty); } newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), newClient); //Start accepting clients asynchronously again mainSocket.BeginAccept(OnClientConnected, null); } /// Waits for the upcoming messages from different clients and fires the proper event according to the packet type. /// /// private void WaitForData(IAsyncResult asyncResult) { ClientInfo sendingClient = null; try { //Take the client information from the asynchronous result resulting from the BeginReceive sendingClient = asyncResult.AsyncState as ClientInfo; // If client is disconnected, then throw a socket exception // with the correct error code. if (!IsConnected(sendingClient.WorkerSocket)) { throw new SocketException(ClientDisconnectedErrorCode); } //End the pending receive request sendingClient.WorkerSocket.EndReceive(asyncResult); //Fire the appropriate event FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket); // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } catch (SocketException e) { if (e.ErrorCode == ClientDisconnectedErrorCode) { // Close the socket. if (sendingClient.WorkerSocket != null) { sendingClient.WorkerSocket.Close(); sendingClient.WorkerSocket = null; } // Remove it from the hash table. 
connectedClients.Remove(sendingClient.PlayerId); if (ClientDisconnectedEvent != null) { ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId)); } } } catch (Exception e) { // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } } /// /// Broadcasts the input message to all the connected clients /// /// public void BroadcastMessage(BasePacket message) { byte[] bytes = message.ConvertToBytes(); foreach (ClientInfo client in connectedClients.Values) { client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client); } } /// /// Sends the input message to the client specified by his ID. /// /// /// The message to be sent. /// The id of the client to receive the message. public void SendToClient(BasePacket message, int id) { byte[] bytes = message.ConvertToBytes(); (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, connectedClients[id]); } private void SendAsync(IAsyncResult asyncResult) { ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState; currentClient.WorkerSocket.EndSend(asyncResult); } /// Fires the event depending on the type of received packet /// /// The received packet. void FireMessageTypeEvent(BasePacket packet) { switch (packet.MessageType) { case MessageType.PositionUpdateMessage: if (PositionUpdateMessageEvent != null) { PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket)); } break; } } } The events fired are handled in a different class, here are the event handling code for the PositionUpdateMessage (Other handlers are irrelevant): private readonly Hashtable onlinePlayers = new Hashtable(); /// /// Constructor that creates a new instance of the GameController class. /// private GameController() { //Start the server server = new Thread(networkManager.StartServer); server.Start(); //Create an event handler for the NewClientEvent of networkManager networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived; } /// /// this event handler is called when a client asks for movement. /// private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e) { Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position; Point locationRequested = e.PositionUpdatePacket.Position; ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested; // Broadcast the new position networkManager.BroadcastMessage(new PositionUpdatePacket { Position = locationRequested, PlayerId = e.PositionUpdatePacket.PlayerId }); }
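    The jump from ~17 ms to ~280 ms when two clients transmit at once is more consistent with Nagle's algorithm coalescing the many small broadcast packets (and colliding with delayed ACKs) than with Socket thread-safety, with the unsynchronized Hashtable as a secondary suspect once clients connect and disconnect concurrently. A hedged sketch against the NetworkManager shown above: disable Nagle on every accepted socket and broadcast from a snapshot taken under a lock (needs using System.Linq; in addition to the existing usings).

    private void OnClientConnected(IAsyncResult asyncResult)
    {
        Socket worker = mainSocket.EndAccept(asyncResult);
        worker.NoDelay = true;   // send small game packets immediately instead of waiting to coalesce

        Interlocked.Increment(ref clientCount);
        ClientInfo newClient = new ClientInfo { WorkerSocket = worker, PlayerId = clientCount };

        lock (connectedClients.SyncRoot)
        {
            connectedClients.Add(newClient.PlayerId, newClient);
        }

        // ... unchanged: raise NewClientEvent, BeginReceive for this client, BeginAccept again.
    }

    public void BroadcastMessage(BasePacket message)
    {
        byte[] bytes = message.ConvertToBytes();

        ClientInfo[] snapshot;
        lock (connectedClients.SyncRoot)
        {
            snapshot = connectedClients.Values.Cast<ClientInfo>().ToArray();
        }

        foreach (ClientInfo client in snapshot)
        {
            client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client);
        }
    }

    If the latency stays high with NoDelay set, timing each stage separately (receive, handler, BeginSend completion) will show where the 280 ms actually goes before any larger rewrite of the socket layer.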

    Read the article

  • Rails with Oracle often gets "no listener" error

    - by qichunren
    I am on Rails 2.3.5 and use oracle 10 as my database,use oracle_adapter ,ruby-oci8 to connect oracle host. But I often got exception as the log info show: Completed in 463ms (View: 18, DB: 166) | 200 OK [http://192.168.30.128/auctions?page=1] /!\ FAILSAFE /!\ Mon Feb 01 19:02:11 +0800 2010 Status: 500 Internal Server Error ORA-12541: TNS:no listener env.c:257:in oci8lib.so /home/qichunren/.gem/ruby/1.8/gems/ruby-oci8-1.0.7/lib/oci8.rb:229:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:623:in `new' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:623:in `new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:659:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:35:in `new' /home/qichunren/.gem/ruby/1.8/gems/activerecord-oracle-adapter-1.0.0.9250/lib/active_record/connection_adapters/oracle_adapter.rb:35:in `oracle_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /usr/local/ruby187/lib/ruby/1.8/monitor.rb:242:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /home/qichunren/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/params_parser.rb:15:in `call' 
/home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/session/cookie_store.rb:93:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/failsafe.rb:26:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:114:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:34:in `run' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:108:in `call' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/cgi_process.rb:44:in `dispatch_cgi' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi' /home/qichunren/.gem/ruby/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:27:in `dispatch' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:76:in `process' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `synchronize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `process' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:159:in `process_client' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `each' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `process_client' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `new' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `initialize' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `new' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:282:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `each' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/command.rb:212:in `run' /home/qichunren/.gem/ruby/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281 /home/qichunren/.gem/ruby/1.8/bin/mongrel_rails:19:in `load' /home/qichunren/.gem/ruby/1.8/bin/mongrel_rails:19 it seems that connection to oracle often disconnect.it show oracle error:ORA-12541: TNS:no listener How to fix this ? Many tks. 
oci8.c:270:in oci8lib.so: ORA-12541: TNS:no listener (OCIError) from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:223:in new' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:223:innew_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:328:in initialize' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:24:innew' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_oci_connection.rb:24:in initialize' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_connection.rb:9:innew' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_connection.rb:9:in create' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-oracle_enhanced-adapter-1.2.4/lib/active_record/connec tion_adapters/oracle_enhanced_adapter.rb:50:inoracle_enhanced_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:291:in send' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:291:inconnection=' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:259:in retrieve_connection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/connection_adapters/abstract/c onnection_specification.rb:78:inconnection' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:2438:in quoted_table_ name' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1259:infind_one' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:1250:in find_from_ids ' from /opt/ruby-enterprise-1.8.7/lib/ruby/gems/1.8/gems/activerecord-2.0.2/lib/active_record/base.rb:504:infind' from script/maintenance/adjust_settlement.rb:19
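    ORA-12541 is raised before authentication, when the client cannot reach any listener at the host and port it resolved, so it is worth proving or disproving the network path with ruby-oci8 alone, outside Rails and Mongrel. A hedged sketch; the credentials and the //host:port/SID string are placeholders to replace with the values from database.yml:

    require 'oci8'

    begin
      # Placeholders: use the host, port, SID/service name and credentials from database.yml.
      conn = OCI8.new('app_user', 'secret', '//192.168.30.200:1521/ORCL')
      conn.exec('SELECT 1 FROM dual') { |row| puts "listener reachable, got #{row.inspect}" }
      conn.logoff
    rescue OCIError => e
      puts "still failing outside Rails: #{e.message}"   # ORA-12541 here too => listener or network, not the adapter
    end

    If this script also fails intermittently, the usual suspects are a listener that is down or restarting on the database host (check with lsnrctl status there) or a firewall dropping traffic to port 1521 between the Rails box and the database; if it always succeeds, reconnection handling in the adapter's connection pool is the place to look instead.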

    Read the article

  • java.lang.IllegalAccessException during Ant jwsc webservice build

    - by KevB
    Hi. I have a large application, part of which relies on a set of 3 webservices. I'm currently in the process of writing an Ant build script to build and package the application into an EAR file. When building the web sub-project for this application I use the <jwsc> task in Ant to compile the webservices. This causes an IllegalAccessException, as outlined in the stack trace below: [jwsc] warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [jwsc] JWS: processing module weboutput [jwsc] Parsing source files [jwsc] Parsing source files [jwsc] 3 JWS files being processed for module weboutput [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\DailyRun.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\PendingRegistrationsSweep.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\RegistrationsGoLive.java Validated. [jwsc] Compiling 6 source files to C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp\_5l950r [jwsc] An exception has occurred in the compiler (1.6.0_23). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. [jwsc] java.lang.IllegalAccessError: tried to access class com.sun.tools.javac.jvm.ClassReader$AnnotationDefaultCompleter from class com.sun.tools.javac.jvm.ClassReader [jwsc] at com.sun.tools.javac.jvm.ClassReader.attachAnnotationDefault(ClassReader.java:1128) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttr(ClassReader.java:906) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttrs(ClassReader.java:1027) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMethod(ClassReader.java:1490) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClass(ClassReader.java:1586) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClassFile(ClassReader.java:1658) [jwsc] at com.sun.tools.javac.jvm.ClassReader.fillIn(ClassReader.java:1845) [jwsc] at com.sun.tools.javac.jvm.ClassReader.complete(ClassReader.java:1777) [jwsc] at com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.jvm.ClassReader.loadClass(ClassReader.java:1951) [jwsc] at com.sun.tools.javac.comp.Resolve.loadClass(Resolve.java:842) [jwsc] at com.sun.tools.javac.comp.Resolve.findIdentInPackage(Resolve.java:1011) [jwsc] at com.sun.tools.javac.comp.Attr.selectSym(Attr.java:1921) [jwsc] at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:1835) [jwsc] at com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1522) [jwsc] at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:360) [jwsc] at com.sun.tools.javac.comp.Attr.attribType(Attr.java:390) [jwsc] at com.sun.tools.javac.comp.MemberEnter.attribImportType(MemberEnter.java:681) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitImport(MemberEnter.java:545) [jwsc] at com.sun.tools.javac.tree.JCTree$JCImport.accept(JCTree.java:495) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:399) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitTopLevel(MemberEnter.java:512) [jwsc] at com.sun.tools.javac.tree.JCTree$JCCompilationUnit.accept(JCTree.java:446) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.complete(MemberEnter.java:819) [jwsc] at 
com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.comp.Enter.complete(Enter.java:464) [jwsc] at com.sun.tools.javac.comp.Enter.main(Enter.java:442) [jwsc] at com.sun.tools.javac.main.JavaCompiler.enterTrees(JavaCompiler.java:819) [jwsc] at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:727) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:353) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:279) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:270) [jwsc] at com.sun.tools.javac.Main.compile(Main.java:69) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:56) [jwsc] at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1097) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask$ExposingJavac.compile(DelegatingJavacTask.java:343) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask.compile(DelegatingJavacTask.java:286) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.javac(JwscTask.java:335) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.compile(JwsModule.java:390) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.build(JwsModule.java:262) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.execute(JwscTask.java:227) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at com.bea.workshop.cmdline.antlib.AntCallExTask.execute(AntCallExTask.java:118) [jwsc] at 
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:217) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at 
java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259) [jwsc] at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268) [jwsc] at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:299) [jwsc] at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at 
org.apache.tools.ant.Main.runBuild(Main.java:801) [jwsc] at org.apache.tools.ant.Main.startAnt(Main.java:218) [jwsc] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280) [jwsc] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109) [AntUtil.deleteDir] Deleting directory C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp_5l950r The Ant target that uses the <jwsc> task is this: <target name="webservice.build" depends="init,generated.root.init"> <path id="jwsc.srcpath"> <path path="${java.sourcepath}" /> <pathelement path="build/assembly/.src" /> </path> <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" > <classpath> <path refid="weblogic.jar.classpath" /> </classpath> </taskdef> <property name="jwsc.module.root" value="${project.dir}/build/weboutput"/> <property name="jwsc.contextpath" value="irWeb"/> <property name="jwsc.srcpath.prop" refid="jwsc.srcpath"/> <path id="jwsc.classpath"> <path refid="weblogic.jar.classpath" /> <path refid="java.classpath" /> <pathelement path="${java.outpath}" /> </path> <jwsc destdir="${project.dir}/build" classpathref="jwsc.classpath"> <module name="weboutput" explode="true" contextPath="${jwsc.contextpath}" > <jwsFileSet srcdir="${webservices.dir}" type="JAXRPC"> <include name="**/*.java"/> </jwsFileSet> <descriptor file="${jwsc.module.root}/WEB-INF/web.xml" /> <descriptor file="${jwsc.module.root}/WEB-INF/weblogic.xml" /> </module> </jwsc> </target> I have no idea what could be causing the compiler to throw this error at build time, and a day of google searching has turned up other instances of this error caused by different triggers, and solutions for those propblems didn't work for me. I also found a single report on the Oracle forums that seemed to be a carbon copy of this issue, but there were no replies. The application is written in Weblogic Workshop 10, runs on Weblogic Server 10.3, and uses Beehive / NetUI. Not sure if that would make a difference or not though. The build scripts were automatically generated by Weblogic Workshop, with some tweaks and fixes made to other aspects of the files by myself to fix other compatability issues. I am using Java 1.6.0_23 from Sun, and Ant 1.8.1 Any help or advice would be greatly appreciated.
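    The crash happens inside the in-process compiler adapter that <jwsc> drives (the trace runs through org.apache.tools.ant.taskdefs.compilers.Javac13), and compiler failures of this shape are usually tied to a specific JDK update. Two low-risk experiments: run the build under a different JDK than 1.6.0_23, or ask Ant for the forked external compiler via the standard build.compiler property, as in the hedged sketch below; whether WebLogic's jwsc compile step honours that property is an assumption to verify.

    <!-- Hedged sketch: request the forked external javac instead of Ant's in-process adapter.
         build.compiler=extJavac is standard Ant; confirm the jwsc compile step picks it up. -->
    <target name="webservice.build" depends="init,generated.root.init">
        <property name="build.compiler" value="extJavac"/>
        <!-- ... existing jwsc.srcpath, taskdef and <jwsc> call unchanged ... -->
    </target>

    If neither helps, reporting the diagnostic to the JDK bug tracker as the message suggests, while pinning the build machine to a JDK update that does not exhibit the ClassReader IllegalAccessError, is the pragmatic workaround.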

    Read the article

< Previous Page | 632 633 634 635 636 637 638 639  | Next Page >