Search Results

Search found 12775 results on 511 pages for 'remote validation'.


  • Anti-Forgery Request Helpers for ASP.NET MVC and jQuery AJAX

    - by Dixin
Background

To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request; the server then validates the tokens. To print the tokens to the browser, just invoke HtmlHelper.AntiForgeryToken():

<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %>

This invocation generates a token and writes it inside the form:

<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form>

and also writes it into the cookie:

__RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP

When the above form is submitted, both tokens are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify which controllers or actions validate them:

[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... }

This is very productive for form scenarios, but recently, while resolving security vulnerabilities in Web products, some problems came up.

Specify validation on the controller (not on each action)

The server-side problem is that it would be natural to declare [ValidateAntiForgeryToken] once on the controller, but it actually has to be declared on each POST action. Because there are usually many more POST actions than controllers, this quickly gets out of hand.

Problem

Usually a controller contains actions for HTTP GET requests and actions for HTTP POST requests, and validation is usually only wanted for the HTTP POST requests. So if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:

[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... }

If the browser sends an HTTP GET request by clicking a link such as http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action:

public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... }

This is a little bit crazy, because one application can have a lot of POST actions.
Solution

To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute class can be helpful; it lets the HTTP verbs to validate be specified:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } }

When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated:

[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. }

Now one single attribute on the controller turns on validation for all POST actions. It would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute; that would be easy to implement.

Submit the token via AJAX

The browser-side problem is that if the server turns on anti-forgery validation for POST, AJAX POST requests will fail by default.

Problem

For AJAX scenarios, the request is sent by jQuery instead of a form:

$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback);

This kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

Solution

The tokens are printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called somewhere, so that the browser has the token in the HTML and in the cookie. Then jQuery must find the printed token in the HTML and append it to the data before sending:

$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback);

To be reusable, this can be encapsulated into a tiny jQuery plugin:

/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> in the specified window. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery);

In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and $.ajax() with $.ajaxAntiForgery():

$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted.

There might be scenarios with a custom token. For these, $.appendAntiForgeryToken() is provided:

data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback);

And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent from an iframe while the token is in the parent window. Here a window can be specified for $.getAntiForgeryToken():

data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback);

If you have a better solution, please do tell me.

    Read the article

  • PHP, MySQL, jQuery, AJAX: json data returns correct response but frontend returns error

    - by Devner
    Hi all, I have a user registration form. I am doing server side validation on the fly via AJAX. The quick summary of my problem is that upon validating 2 fields, I get error for the second field validation. If I comment first field, then the 2nd field does not show any error. It has this weird behavior. More details below: The HTML, JS and Php code are below: HTML FORM: <form id="SignupForm" action=""> <fieldset> <legend>Free Signup</legend> <label for="username">Username</label> <input name="username" type="text" id="username" /><span id="status_username"></span><br /> <label for="email">Email</label> <input name="email" type="text" id="email" /><span id="status_email"></span><br /> <label for="confirm_email">Confirm Email</label> <input name="confirm_email" type="text" id="confirm_email" /><span id="status_confirm_email"></span><br /> </fieldset> <p> <input id="sbt" type="button" value="Submit form" /> </p> </form> JS: <script type="text/javascript"> $(document).ready(function() { $("#email").blur(function() { var email = $("#email").val(); var msgbox2 = $("#status_email"); if(email.length > 3) { $.ajax({ type: 'POST', url: 'check_ajax2.php', data: "email="+ email, dataType: 'json', cache: false, success: function(data) { if(data.success == 'y') { alert('Available'); } else { alert('Not Available'); } } }); } return false; }); $("#confirm_email").blur(function() { var confirm_email = $("#confirm_email").val(); var email = $("#email").val(); var msgbox3 = $("#status_confirm_email"); if(confirm_email.length > 3) { $.ajax({ type: 'POST', url: 'check_ajax2.php', data: 'confirm_email='+ confirm_email + '&email=' + email, dataType: 'json', cache: false, success: function(data) { if(data.success == 'y') { alert('Available'); } else { alert('Not Available'); } } , error: function (data) { alert('Some error'); } }); } return false; }); }); </script> PHP code: <?php //check_ajax2.php if(isset($_POST['email'])) { $email = $_POST['email']; $res = mysql_query("SELECT uid FROM members WHERE email = '$email' "); $i_exists = mysql_num_rows($res); if( 0 == $i_exists ) { $success = 'y'; $msg_email = 'Email available'; } else { $success = 'n'; $msg_email = 'Email is already in use.</font>'; } print json_encode(array('success' => $success, 'msg_email' => $msg_email)); } if(isset($_POST['confirm_email'])) { $confirm_email = $_POST['confirm_email']; $email = ( isset($_POST['email']) && trim($_POST['email']) != '' ? $_POST['email'] : '' ); $res = mysql_query("SELECT uid FROM members WHERE email = '$confirm_email' "); $i_exists = mysql_num_rows($res); if( 0 == $i_exists ) { if( isset($email) && isset($confirm_email) && $email == $confirm_email ) { $success = 'y'; $msg_confirm_email = 'Email available and match'; } else { $success = 'n'; $msg_confirm_email = 'Email and Confirm Email do NOT match.'; } } else { $success = 'n'; $msg_confirm_email = 'Email already exists.'; } print json_encode(array('success' => $success, 'msg_confirm_email' => $msg_confirm_email)); } ?> THE PROBLEM: As long as I am validating the $_POST['email'] as well as $_POST['confirm_email'] in the check_ajax2.php file, the validation for confirm_email field always returns an error. 
With my limited knowledge of Firebug, however, I did find out that the following were the responses when I entered email and confirm_email in the fields:

RESPONSE 1: {"success":"y","msg_email":"Email available"}

RESPONSE 2: {"success":"y","msg_email":"Email available"}{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."}

Although RESPONSE 2 shows that we are receiving the correct message via msg_confirm_email, on the front end the alert 'Some error' pops up (I have enabled the alert for debugging). I have spent 48 hours trying to change every part of the code wherever possible, but with little success. What is weird about this is that if I comment out the validation for the $_POST['email'] field completely, then the validation for the $_POST['confirm_email'] field displays correctly without any errors. If I enable it again, it validates the email field correctly, but when it reaches the point of validating the confirm_email field, it shows me the error again. I have also tried renaming the success variable in check_ajax2.php to different names for $_POST['email'] and $_POST['confirm_email'], but with no success. I will be adding more fields to the form and validating them within check_ajax2.php, so I am not planning on using a different AJAX page to validate each of those fields (and I don't think it would be smart to do it that way). I am not a jQuery or AJAX guru, so all help in resolving this issue is highly appreciated. Thank you in advance.
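One reading of those responses: with dataType: 'json', jQuery runs the response body through a JSON parser, and RESPONSE 2 is two JSON objects concatenated, which is not a single valid JSON document, so the parse fails and jQuery routes the request to the error callback even though the HTTP request itself succeeded. This is only a plausible explanation suggested by the captured responses, not a confirmed diagnosis; the TypeScript sketch below simply demonstrates the parsing behavior using the response bodies quoted above.

```typescript
// The first body parses; the concatenated body throws, which is what jQuery's
// dataType:'json' handling surfaces as the error callback instead of success.
const response1 = '{"success":"y","msg_email":"Email available"}';
const response2 = response1 +
  '{"success":"n","msg_confirm_email":"Email and Confirm Email do NOT match."}';

console.log(JSON.parse(response1).msg_email); // "Email available"

try {
  JSON.parse(response2);                      // throws: the trailing second object is invalid
} catch (e) {
  console.log("Not valid JSON:", (e as Error).message);
}
```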

    Read the article

  • Linux router: ping doesn't route back

    - by El Barto
    I have a Debian box which I'm trying to set up as a router and an Ubuntu box which I'm using as a client. My problem is that when the Ubuntu client tries to ping a server on the Internet, all the packets are lost (though, as you can see below, they seem to go to the server and back without problem). I'm doing this in the Ubuntu Box: # ping -I eth1 my.remote-server.com PING my.remote-server.com (X.X.X.X) from 10.1.1.12 eth1: 56(84) bytes of data. ^C --- my.remote-server.com ping statistics --- 13 packets transmitted, 0 received, 100% packet loss, time 12094ms (I changed the name and IP of the remote server for privacy). From the Debian Router I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 7, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 8, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 8, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 9, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 9, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 10, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 10, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 11, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 11, length 64 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel # tcpdump -i eth2 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 213, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 213, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 214, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 214, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 215, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 215, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 216, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 216, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 217, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 217, length 64 ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel And at the remote server I see this: # tcpdump -i eth0 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 1, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 1, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 2, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 2, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 3, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 3, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 4, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 4, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 5, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 5, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 6, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 6, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 7, 
length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 7, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 8, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 8, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 9, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 9, length 64 18 packets captured 228 packets received by filter 92 packets dropped by kernel Here "X.X.X.X" is my remote server's IP and "Y.Y.Y.Y" is my local network's public IP. So, what I understand is that the ping packets are coming out of the Ubuntu box (10.1.1.12), to the router (10.1.1.1), from there to the next router (192.168.1.1) and reaching the remote server (X.X.X.X). Then they come back all the way to the Debian router, but they never reach the Ubuntu box back. What am I missing? Here's the Debian router setup: # ifconfig eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:105761 errors:0 dropped:0 overruns:0 frame:0 TX packets:48944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40298768 (38.4 MiB) TX bytes:44831595 (42.7 MiB) Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38335992 errors:0 dropped:0 overruns:0 frame:0 TX packets:37097705 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:4260680226 (3.9 GiB) TX bytes:3759806551 (3.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3408 errors:0 dropped:0 overruns:0 frame:0 TX packets:3408 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:358445 (350.0 KiB) TX bytes:358445 (350.0 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:2767779 errors:0 dropped:0 overruns:0 frame:0 TX packets:1569477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3609469393 (3.3 GiB) TX bytes:96113978 (91.6 MiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Ubuntu box (on 10.1.1.12 and 192.168.1.12) Address HWtype HWaddress Flags Mask Iface 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.102 ether NN:NN:NN:NN:NN:NN C eth2 10.1.1.12 ether 00:1e:67:15:2b:f0 C eth1 192.168.1.86 ether NN:NN:NN:NN:NN:NN C 
eth2 192.168.1.2 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.40 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.12 ether 00:1e:67:15:2b:f1 C eth2 192.168.1.77 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.41 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.123 ether NN:NN:NN:NN:NN:NN C eth2 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 10.1.1.0/24 !10.1.1.0/24 MASQUERADE all -- !10.1.1.0/24 10.1.1.0/24 Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the Ubuntu box: # ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f1 inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28785139 errors:0 dropped:0 overruns:0 frame:0 TX packets:19050735 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32068182803 (32.0 GB) TX bytes:6061333280 (6.0 GB) Interrupt:16 Memory:b1a00000-b1a20000 eth1 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f0 inet addr:10.1.1.12 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:285086 errors:0 dropped:0 overruns:0 frame:0 TX packets:12719 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30817249 (30.8 MB) TX bytes:2153228 (2.1 MB) Interrupt:16 Memory:b1900000-b1920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:86048 errors:0 dropped:0 overruns:0 frame:0 TX packets:86048 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11426538 (11.4 MB) TX bytes:11426538 (11.4 MB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.1.1 0.0.0.0 UG 100 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.8.0.0 192.168.1.10 255.255.255.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Debian box (on 10.1.1.1 and 192.168.1.10) Address HWtype HWaddress Flags Mask Iface 192.168.1.70 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.97 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.103 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.13 ether 
NN:NN:NN:NN:NN:NN C eth0 192.168.1.120 (incomplete) eth0 192.168.1.111 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.51 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.102 (incomplete) eth0 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.74 (incomplete) eth0 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.121 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.71 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.88 (incomplete) eth0 192.168.1.82 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.98 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.73 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.11 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.85 (incomplete) eth0 192.168.1.112 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.81 ether NN:NN:NN:NN:NN:NN C eth0 10.1.1.1 ether 94:0c:6d:82:0d:98 C eth1 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.10 ether 6c:f0:49:a4:47:38 C eth0 192.168.1.86 (incomplete) eth0 192.168.1.119 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth1 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth0 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination Edit: Following Patrick's suggestion, I did a tcpdump con the Ubuntu box and I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 1, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 1, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 2, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 2, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 3, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 3, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 4, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 4, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 5, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 5, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 6, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 6, length 64 ^C 12 packets captured 12 packets received by filter 0 packets dropped by kernel So the question is: if all packets seem to be coming and going, why does ping report 100% packet loss?

    Read the article

  • Installing the Updated XP Mode which Requires no Hardware Virtualization

    - by Mysticgeek
Good news for those of you who have a computer without Hardware Virtualization: Microsoft has dropped the requirement, so you can now run XP Mode on your machine. Here we take a look at how to install it and get it working on your PC.

Microsoft has dropped the requirement that your CPU support Hardware Virtualization for XP Mode in Windows 7. Before this requirement was dropped, we showed you how to use SecureAble to find out if your machine would run XP Mode. If it couldn't, you might have gotten lucky by turning Hardware Virtualization on in your BIOS, or by getting an update that would enable it. If not, you were out of luck or needed a different machine. Note: Although you no longer need Hardware Virtualization, you still need the Professional, Enterprise, or Ultimate edition of Windows 7.

Download Correct Version of XP Mode

For this article we're installing it on a Dell machine running the 64-bit version of Windows 7 Ultimate that doesn't support Hardware Virtualization. The first thing you'll want to do is go to the XP Mode website and select your edition of Windows 7 and language. There are three downloads you'll need to get from the page: Windows XP Mode, Windows Virtual PC, and the Windows XP Mode Update (all links below). Windows genuine validation is required before you can download the XP Mode files. To make the validation process easier, you might want to use IE when downloading these files and validating your version of Windows.

Installing XP Mode

After validation is successful, the first thing to download and install is XP Mode, which is easy: follow the wizard and accept the defaults. The second step is to install KB958559, which is Windows Virtual PC. After it's installed, a reboot is required. After you've come back from the restart, you'll need to install KB977206, which is the Windows XP Mode Update. After that's installed, yet another restart of your system is required. After the update is configured and you return from the second reboot, you'll find XP Mode in the Start menu under the Windows Virtual PC folder. When it launches, accept the license agreement and click Next. Enter your login credentials, choose whether you want Automatic Updates or not, and you're given a message saying setup will share the hardware on your computer; then click Start Setup. While setup completes, you're shown a display of what XP Mode does and how to use it. XP Mode then launches and you can begin using it to run older applications that are not compatible with Windows 7.

Conclusion

This is welcome news for many who want the ability to use XP Mode but didn't have the proper hardware to do it. The bad news is that users of Home versions of Windows still don't get to enjoy the XP Mode feature officially. However, we have an article that shows a great workaround – Create an XP Mode for Windows 7 Home Versions & Vista.
Download XP Mode, Windows Virtual PC, and Windows XP Mode Update

    Read the article

  • Add Zune Desktop Player to Windows 7 Media Center

    - by DigitalGeekery
Are you a Zune owner who prefers the Zune player for media playback? Today we'll show you how to integrate the Zune player with WMC using Media Center Studio.

You'll need to download Media Center Studio and the Zune Desktop player software (see download links below). Also, make sure you have Media Center closed; some of the actions in Media Center Studio cannot be performed while WMC is open.

Open Media Center Studio and click on the Start Menu tab at the top of the application. Click the Application button. Here we will create an Entry Point for the Zune player so that we can add it to Media Center. Type a name for your entry point in the title text box. This is the name that will appear under the tile when added to the Media Center start menu. Next, type in the path to the Zune player. By default this should be C:\Program Files\Zune\Zune.exe. Note: Be sure to use the original path, not a link to the desktop icon.

The Active image is the image that will appear on the tile in Media Center. If you wish to change the default image, click the Browse button and select a different image. Select Stop the currently playing media from the "When launched do the following:" dropdown list. Otherwise, if you open the Zune player from WMC while playing another form of media, that media will continue to play in the background.

Now we will choose a keystroke to use to exit the Zune player software and return to Media Center. Click on the green plus (+) button. When prompted, press the key you want to use to close the Zune player. Note: This may also work with your Media Center remote. You may want to set a keyboard keystroke as well as a button on your remote to close the program. You may not be able to set certain remote buttons to close the application; we found that the back arrow button worked well. You can also choose a keystroke to kill the program if desired. Be sure to save your work before exiting by clicking the Save button on the Home tab.

Next, select the Start Menu tab and click the expander next to Entry points to reveal the available entry points. Find the Zune player tile in the Entry points area. We want to drag the tile out onto one of the menu strips on the start menu; we will drag ours onto the Extras Library strip. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. When you've dragged the tile over any of the green plus signs, the red "Move" label will turn to a blue "Move to" label. Now you can drop the tile into position. Save your changes and then close Media Center Studio.

When you open Media Center, you should see your Zune tile on the start menu. When you select the Zune tile in WMC, Media Center will be minimized and the Zune player will be launched. Now you can enjoy your media through the Zune player. When you close the Zune player with the previously assigned keystroke or by clicking the "X" at the top right, Windows Media Center will be re-opened.

Conclusion

We found the Zune player worked with two different Media Center remotes that we tested. It was at times a little tricky to tell where you were when navigating through the Zune software with a remote, but it did work. In addition to managing your music, the Zune player is a nice way to add podcasts to your Media Center setup. We should also mention that you don't need to actually own a Zune to install and use the Zune player software. Media Center Studio works on both Vista and Windows 7.
We covered Media Center Studio a bit more in depth in a previous post on customizing the Windows Media Center start menu. Are you new to the Zune player? Familiarize yourself a bit more by checking out some of our earlier posts, like how to update your Zune player, and experiencing your music a whole new way with Zune for PC.

Downloads: Zune Desktop Player download | Media Center Studio download

    Read the article

  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
The Internet of Things (IoT) is coming. And, with today's launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it.

Java in the Internet of Things

By all accounts, intelligent embedded devices are penetrating the world around us – driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things. As I blogged yesterday, this is a huge opportunity for the Java technology and ecosystem. To enable and utilize these billions of devices effectively you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform. Java technology is ideally suited to address these technical and business problems, enabling you to eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology is already powering around 10 billion devices worldwide. Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product: Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update release of the Java ME Software Development Kit (SDK) to version 3.2.

What is Oracle Java ME Embedded 3.2?

Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted at low-power, limited-memory devices requiring support for a range of network services and I/O interfaces.

What features and APIs are provided by Oracle Java ME Embedded 3.2?

Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs: File I/O APIs (JSR-75), Wireless Messaging APIs (JSR-120), Web Services (JSR-172), Security and Trust Services subset (JSR-177), Location APIs (JSR-179), XML APIs (JSR-280), Device Access API, Application Management System (AMS) API, AccessPoint API, and Logging API. Additional embedded features are: a remote application management system; support for continuous 24×7 operation; application monitoring, auto-start, and system recovery; application access to peripheral interfaces such as GPIO, I2C, SPI, and memory-mapped I/O; an application-level logging framework, including an option for remote logging; headless on-device debugging (source-level Java application debugging over an IP connection); and remote configuration of the Java VM.

What type of platforms are targeted by Oracle Java ME Embedded 3.2?

The product is designed for embedded, always-on, resource-constrained, headless (no graphics/no UI), connected (wired or wireless) devices with a variety of peripheral I/O.
The high-level system requirements are as follows: System based on ARM architecture SOCs Memory footprint (approximate) from 130 KB RAM/350KB ROM (for a minimal, customized configuration) to 700 KB RAM/1500 KB ROM (for the full, standard configuration)  Very simple embedded kernel, or a more capable embedded OS/RTOS At least one type of network connection (wired or wireless) The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).  What types of applications can I develop with Oracle Java ME Embedded 3.2? The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant for embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar and best-in class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking. What tools are available for embedded application development for Oracle Java ME Embedded 3.2? Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.  OK – sounds great! Where can I find out more? And how do I get started? There is a complete set of information, data sheet, API documentation, “Getting Started Guide”, FAQ, and download links available: For an overview of Oracle Embeddable Java, see here. For the Oracle Java ME Embedded 3.2 press release, see here. For the Oracle Java ME Embedded 3.2 data sheet, see here. For the Oracle Java ME Embedded 3.2 landing page, see here. For the Oracle Java ME Embedded 3.2 documentation page, including a “Getting Started Guide” and FAQ, see here. For the Oracle Java ME SDK 3.2 landing and download page, see here. Finally, to ask more questions, please see the OTN “Java ME Embedded” forum To get started, grab the “Getting Started Guide” and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.  Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne? Glad you asked Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions, to hands-on tutorials, and demos. Stay tuned, I will post details shortly. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, the ones I attended and that interested me.

MalAndroid—the Crux of Android Infections, Aditya K. Sood
Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?, Valkyrie
Hacking Measured Boot and UEFI, Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
Accessibility and Security, Anna Shubina
Stop Patching, for Stronger PCI Compliance, Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

What security problems does Android have? User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid); all they want is functionality, extensibility, and mobility. Android had no "proper" encryption before Android 3.0, and there is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

Aditya classified Android malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location. Type K—Kernel. These exploit the underlying Linux libraries or kernel. Type H—Hybrid. These use multiple layers (app framework, libraries, kernel) and are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes; for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with the malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting themselves. One example is the DKF Bootkit (China). Android app problems: no code signing (everything is self-signed), native code execution, an all-or-none permission sandbox, alternate marketplaces, no robust Android malware detection at the network level, and a delayed patch process.

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries as "registers"; the symbol table value as "contents"; immediate values as constants; and direct values as addresses (e.g., 0xdeadbeef). The move instruction is a relocation table entry, the add instruction is a relocation table "addend" entry, and the jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and three registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh # executes backdoor
$ whoami
root

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" the data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Allow opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM is a hardware security device that stores hashes and hardware-protected keys. "Secure boot" can control at the firmware level which boot images can boot. "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowdsourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks). Journalists want: leads, protecting ourselves and our sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers, for our purposes. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare: an executable format, embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust." Not much help though, except for "Robust," but here are some gems: * all information should be parsable (paraphrasing) * if not parsable, it cannot be converted to alternate formats * maximize compatibility in new document formats. Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source; expose the "better user experience" myth; keep your corner of the Internet parsable; remember the "Halting Problem"—recognize false solutions (checking and verifying tools). Stop Patching, for Stronger PCI Compliance Adam Brand, Protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POS systems in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40: Proactive: use a proactive vulnerability management process: use change control and configuration management, and monitor file integrity. Monitor: start with the NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank). Decide: on an action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets. McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to view a status change by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1-pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.
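The seal-monitoring technique Jay and Shane describe is simple to sketch: poll the trustmark image on a schedule and flag the day it turns into a placeholder. Below is a minimal C# illustration of that idea (their actual tooling was Perl); the seal URL, the 1x1-pixel placeholder convention as the only signal, and the class and method names are assumptions for illustration, not their code.
    // Minimal sketch: poll a site's trustmark image and flag when it shrinks to a
    // 1x1 placeholder. The URL and the exact placeholder size are assumptions.
    using System;
    using System.Drawing;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class SealWatcher
    {
        static readonly HttpClient http = new HttpClient();

        // Returns true if the seal image still looks like a real badge.
        static async Task<bool> CheckSealAsync(string sealImageUrl)
        {
            byte[] bytes = await http.GetByteArrayAsync(sealImageUrl);
            using (var image = Image.FromStream(new MemoryStream(bytes)))
            {
                return image.Width > 1 && image.Height > 1;
            }
        }

        static async Task Main()
        {
            // Hypothetical seal URL; compare today's result with yesterday's to spot a change.
            bool certified = await CheckSealAsync("https://example.com/trustmark-seal.gif");
            Console.WriteLine(certified ? "Seal present" : "Seal gone - status changed");
        }
    }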

    Read the article

  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
    For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas? 2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati) 2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0 2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3 2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records 2010-05-02 21:26:50 Collection DEBUG POST Length: 388 2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms 2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1 2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation 2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records 2010-05-02 21:27:09 Collection DEBUG POST Length: 35337 2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history 2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540 2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records 2010-05-02 21:27:32 Collection DEBUG POST Length: 393 2010-05-02 21:27:54 Collection DEBUG POST success 200 
https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs 2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8 2010-05-02 21:27:54 Service.Main INFO Sync completed successfully 2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available 2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629 2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3 2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized 2010-05-03 17:26:25 Engine.History DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time 2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) 2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/ 2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds 2010-05-03 17:26:36 Service.Main INFO Logging in user mehper 2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • "Host key verification failed" error when transfering files using SCP command

    - by rvsi
    When I am trying to transfer files using SCP command I'm getting this error (Removed my IP and RSA key): @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that the RSA host key has just been changed. The fingerprint for the RSA key sent by the remote host is ------------------------(RSA key) Please contact your system administrator. Add correct host key in /home/users/myaccount/.ssh/known_hosts to get rid of this message. Offending key in /home/users/myaccount/.ssh/known_hosts:4 RSA host key for 'my IP' has changed and you have requested strict checking. Host key verification failed. lost connection I am using newly installed Ubuntu 12.04 and I can connect to this server using ssh. Any help?

    Read the article

  • Add a Sleep Timer to Windows 7 Media Center

    - by DigitalGeekery
    Do you make it a habit of falling asleep at night while watching Windows Media Center? Today we are going to take a look at the MC7 Sleep Timer for Windows 7 Media Center. This simple little plugin allows you to schedule an automatic shutdown time in Media Center. Note: at this point MC7 Sleep Timer doesn’t work with extenders. If you’re using ClamAV or Panda, it may detect this plugin as a virus; we’ve tested it and this is a false positive for these two antivirus apps. Installation and Usage: Download and install MC7 Sleep Timer (see download below). After the installation is finished, you will find MC7 Sleep Timer located in the Media Center Extras Library. Click on the tile to open the timer and configure your settings. The MC7 Sleep Timer will open in full-screen mode. You can choose to shut down the PC after 30 or 60 minutes, create a custom-length shutdown timer at any 5-minute interval, or select the exact time you want the PC to shut down. After setting your PC to shut down, you’ll get an audio confirmation. To set a custom timer length, scroll to the “Custom timer” option and click right or left on your Media Center remote, or use the right or left arrow keys, to choose how many minutes before shutdown. To schedule a shutdown for a certain time, browse to the “Shutdown at time” button, and scroll right or left with the arrow keys on the keyboard or remote. When you’ve chosen your time, hit “Enter” on the keyboard or “OK” on the remote. Clicking the “Monitor Off” button will turn off only the monitor, and “Cancel Timer” will cancel your shutdown request. Conclusion: If you often find yourself falling asleep every night watching Media Center and then fumbling and stumbling in the middle of the night to shut down your computer, MC7 Sleep Timer might just be a perfect addition to your Media Center setup. Download MC7 Sleep Timer

    Read the article

  • Maximize Your Quadcopter’s Range with a Wi-Fi Repeater

    - by Jason Fitzpatrick
    The majority of commercial quadcopters use Wi-Fi for remote control and suffer from a fairly limited range. This simple hack uses a Wi-Fi router as an extender to radically expand the range of your copter. There’s no heavy modification or code tweaking required; all you need is a power source for the router and the ability to set it up as a repeater. The extra signal boost provided by the repeater extends the range from an average of 50 meters to over 250 meters. Check out the video above to see it in action. If you’re looking for a more dependable but more labor-intensive way to extend the range of your copter, you can also retrofit it with a traditional radio-controlled remote. [via Hack A Day]

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years.  The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database.  Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc.  (PaperWise is a document management and workflow system.)  A common factor for these customers is that they have remote offices connected via MPLS networks. Naturally, the first thing we looked at was the query performance in SQL Profiler.  All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero.  After getting nowhere with SQL Profiler, the situation was escalated to me.  I decided to take a peek with Process Monitor.  Procmon revealed some “gaps” in the TCP/IP traffic.  There were notable delays between send and receive pairs.  The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send.  You might expect some delay because, presumably, the application is doing some thinking in-between the pairs.  But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed.  Procmon also showed a high number of disconnects. Wireshark traces showed that connections to the database were taking between 75ms and 150ms.  Not only that, but connections to a file share containing images were taking 2 seconds!  So, I asked about a trust.  Sure enough, there was a trust between two domains and the file share was on the second domain.  Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share.  Removing the trust had no effect on the connections to the database. Microsoft Network Monitor includes filters that parse TDS packets.  TDS is the protocol that SQL Server uses to communicate.  There is a certificate exchange and some SSL that occurs during authentication.  All of this was evident in the network traffic.  After staring at the network traffic for a while, and examining packets, I decided to call it a night.  On the way home that night, something about the traffic kept nagging at me.  Then it dawned on me…at the beginning of the dance of packets between the client and the server all was well.  Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port.  After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports.  SQL Server would execute a single query and respond on a port, then open a new port and execute the next query.  Connection pooling appeared to be broken. The next morning I wrote a test to confirm my hypothesis.  It turns out that the sequence causing the connection nastiness goes something like this: Make a connection to the database. Open a result set that returns enough records to require multiple roundtrips to the server. For each result, query for some other data in the database (this will open a new implicit connection.)
    Close the inner result set and repeat for every item in the original result set. Close the original connection. Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop.  Originally, I thought this might be due to Microsoft’s denial of service (DoS) attack protection.  After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors.  Server-side cursors are the default, by the way.  Voila!  After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected. While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away.  Believe it or not, this is actually documented by Microsoft, and rather difficult to find.  (At least it was while we were trying to troubleshoot the problem!)  So, if you’re noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor.  If you notice a high number of disconnects, and you’re using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away. Most likely, Microsoft believes this to be appropriate behavior, because ADODB can’t guarantee that all of the data has been retrieved when you execute the inner queries.  I’m not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in-between each of the inner queries.  In that case, there doesn’t seem to be a reason why ADODB can’t use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two.  Instead, ADO appears to make an assumption about the state of the connection. I’ve reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem.  Maybe they can explain to us why this is appropriate behavior.  :)
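    For reference, here is a minimal sketch of the client-side cursor switch described above, written in C# against the ADODB COM interop assembly; the connection string, table, and column names are placeholders, and the production code may well set these properties from a different language entirely.
        // Minimal sketch of the fix described above: force a client-side cursor so the
        // outer result set is buffered on the client and the pooled connection stays reusable.
        // The connection string and query are placeholders.
        using ADODB; // COM interop reference to Microsoft ActiveX Data Objects

        class ClientCursorExample
        {
            static void Main()
            {
                var connection = new Connection();
                connection.Open("Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI",
                                "", "", (int)ConnectOptionEnum.adConnectUnspecified);

                var recordset = new Recordset();
                recordset.CursorLocation = CursorLocationEnum.adUseClient;   // default is adUseServer
                recordset.Open("SELECT CabinetId FROM FileCabinets", connection,
                               CursorTypeEnum.adOpenStatic, LockTypeEnum.adLockReadOnly,
                               (int)CommandTypeEnum.adCmdText);

                while (!recordset.EOF)
                {
                    // Inner queries issued here no longer trigger the flood of extra
                    // connections, because the outer rows are already on the client.
                    recordset.MoveNext();
                }

                recordset.Close();
                connection.Close();
            }
        }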

    Read the article

  • On VirtualBox Guest OS “Could not initialize GLX"

    - by trivelt
    I have a remote build machine with Jenkins and I'm trying to run a GUI application. In Jenkins I installed the Xvnc plugin, which uses TightVNC Server, but each build has failed. Earlier, there was a problem with loading the swrast driver (by libGL); currently the log contains this line: [Error] Could not initialize GLX The remote desktop is Ubuntu 14.04 running on VirtualBox, so I installed VBoxAddons but it didn't resolve the problem. Below are some logs that may be helpful. $ cat /var/log/Xorg.0.log | grep GL [ 20.545] (==) AIGLX enabled [ 20.545] Loading extension GLX [ 20.913] (EE) AIGLX error: vboxvideo does not export required DRI extension [ 20.914] (EE) AIGLX: reverting to software rendering [ 21.615] (II) AIGLX: Loaded and initialized swrast [ 21.615] (II) GLX: Initialized DRISWRAST GL provider for screen 0 $ lsmod | grep box vboxsf 43786 0 vboxpci 23194 0 vboxnetadp 25670 0 vboxnetflt 27613 0 vboxdrv 339502 3 vboxnetadp,vboxnetflt,vboxpci vboxvideo 12658 0 vboxguest 248441 3 vboxsf drm 302817 1 vboxvideo $ lspci | grep VGA 00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter Any ideas what I should do?

    Read the article

  • Lync Server 2010

    - by ManojDhobale
    Microsoft Lync Server 2010 communications software and its client software, such as Microsoft Lync 2010, enable your users to connect in new ways and to stay connected, regardless of their physical location. Lync 2010 and Lync Server 2010 bring together the different ways that people communicate in a single client interface, are deployed as a unified platform, and are administered through a single management infrastructure. Workload Description IM and presence Instant messaging (IM) and presence help your users find and communicate with one another efficiently and effectively. IM provides an instant messaging platform with conversation history, and supports public IM connectivity with users of public IM networks such as MSN/Windows Live, Yahoo!, and AOL. Presence establishes and displays a user’s personal availability and willingness to communicate through the use of common states such as Available or Busy. This rich presence information enables other users to immediately make effective communication choices. Conferencing Lync Server includes support for IM conferencing, audio conferencing, web conferencing, video conferencing, and application sharing, for both scheduled and impromptu meetings. All these meeting types are supported with a single client. Lync Server also supports dial-in conferencing so that users of public switched telephone network (PSTN) phones can participate in the audio portion of conferences. Conferences can seamlessly change and grow in real time. For example, a single conference can start as just instant messages between a few users, and escalate to an audio conference with desktop sharing and a larger audience instantly, easily, and without interrupting the conversation flow. Enterprise Voice Enterprise Voice is the Voice over Internet Protocol (VoIP) offering in Lync Server 2010. It delivers a voice option to enhance or replace traditional private branch exchange (PBX) systems. In addition to the complete telephony capabilities of an IP PBX, Enterprise Voice is integrated with rich presence, IM, collaboration, and meetings. Features such as call answer, hold, resume, transfer, forward and divert are supported directly, while personalized speed dialing keys are replaced by Contacts lists, and automatic intercom is replaced with IM. Enterprise Voice supports high availability through call admission control (CAC), branch office survivability, and extended options for data resiliency. Support for remote users You can provide full Lync Server functionality for users who are currently outside your organization’s firewalls by deploying servers called Edge Servers to provide a connection for these remote users. These remote users can connect to conferences by using a personal computer with Lync 2010 installed, the phone, or a web interface. Deploying Edge Servers also enables you to federate with partner or vendor organizations. A federated relationship enables your users to put federated users on their Contacts lists, exchange presence information and instant messages with these users, and invite them to audio calls, video calls, and conferences. Integration with other products Lync Server integrates with several other products to provide additional benefits to your users and administrators. Meeting tools are integrated into Outlook 2010 to enable organizers to schedule a meeting or start an impromptu conference with a single click and make it just as easy for attendees to join. Presence information is integrated into Outlook 2010 and SharePoint 2010. 
Exchange Unified Messaging (UM) provides several integration features. Users can see if they have new voice mail within Lync 2010. They can click a play button in the Outlook message to hear the audio voice mail, or view a transcription of the voice mail in the notification message. Simple deployment To help you plan and deploy your servers and clients, Lync Server provides the Microsoft Lync Server 2010, Planning Tool and the Topology Builder. Lync Server 2010, Planning Tool is a wizard that interactively asks you a series of questions about your organization, the Lync Server features you want to enable, and your capacity planning needs. Then, it creates a recommended deployment topology based on your answers, and produces several forms of output to aid your planning and installation. Topology Builder is an installation component of Lync Server 2010. You use Topology Builder to create, adjust and publish your planned topology. It also validates your topology before you begin server installations. When you install Lync Server on individual servers, the installation program deploys the server as directed in the topology. Simple management After you deploy Lync Server, it offers the following powerful and streamlined management tools: Active Directory for its user information, which eliminates the need for separate user and policy databases. Microsoft Lync Server 2010 Control Panel, a new web-based graphical user interface for administrators. With this web-based UI, Lync Server administrators can manage their systems from anywhere on the corporate network, without needing specialized management software installed on their computers. Lync Server Management Shell command-line management tool, which is based on the Windows PowerShell command-line interface. It provides a rich command set for administration of all aspects of the product, and enables Lync Server administrators to automate repetitive tasks using a familiar tool. While the IM and presence features are automatically installed in every Lync Server deployment, you can choose whether to deploy conferencing, Enterprise Voice, and remote user access, to tailor your deployment to your organization’s needs.

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, “What does ‘OLEDB’ stand for?” A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience. Books Online: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider. OLEDB Explanation: This wait type primarily happens when a Linked Server or Remote Query has been executed. The most common case wherein this wait type is visible is during the execution of Linked Server queries. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or the connection between them is not fast enough, leading SQL Server to wait for the result to return from the remote (or external) server. This is when the OLEDB wait type occurs. Reducing OLEDB wait: Check the Linked Server configuration. Check disk-related Perfmon counters: Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good), Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good), Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good). At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
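    If you want to see whether this wait type is accumulating on your own server, the sys.dm_os_wait_stats DMV is the usual place to check. The following small C# sketch queries it through System.Data.SqlClient; the connection string is a placeholder, and this is simply one way to look at the numbers, not something from the original post.
        // Minimal sketch: read accumulated OLEDB wait statistics from sys.dm_os_wait_stats.
        // The connection string is a placeholder.
        using System;
        using System.Data.SqlClient;

        class OledbWaitCheck
        {
            static void Main()
            {
                const string connectionString = "Data Source=myServer;Initial Catalog=master;Integrated Security=SSPI";
                const string query = @"SELECT wait_type, waiting_tasks_count, wait_time_ms
                                       FROM sys.dm_os_wait_stats
                                       WHERE wait_type = 'OLEDB';";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1} waits, {2} ms total wait time",
                                reader.GetString(0), reader.GetInt64(1), reader.GetInt64(2));
                        }
                    }
                }
            }
        }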

    Read the article

  • logout when running firefox over ssh

    - by Jacques MALAPRADE
    This is very strange and has never happened before. When I SSH to a remote computer at my uni and try to run Firefox (even with -no-remote etc.), it automatically logs me out of my local computer. When I log in again, it automatically runs TeamViewer (suspicious!!!). It also came up with a "network service discovery disabled" error, similar to this post, when logging back in: network service discovery disabled. I am running Ubuntu 12.04 on an HP Pavilion laptop. I would appreciate it if someone could tell me what I am doing wrong.

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across AppDomains. In almost all cases, the problem I've run into with this error comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come exactly from the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds. An Example of Failure To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly that's being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in-like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is: Create a new AppDomain, Load a Factory Class into the AppDomain, Use the Factory Class to load additional types from the remote domain. Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:public class TypeParserFactory : System.MarshalByRefObject,IDisposable { …/// <summary> /// TypeParser Factory method that loads the TypeParser /// object into a new AppDomain so it can be unloaded. /// Creates AppDomain and creates type. /// </summary> /// <returns></returns> public TypeParser CreateTypeParser() { if (!CreateAppDomain(null)) return null; /// Create the instance inside of the new AppDomain /// Note: remote domain uses local EXE's AppBasePath!!! TypeParser parser = null; try { Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); } catch (Exception ex) { this.ErrorMessage = ex.GetBaseException().Message; return null; } return parser; } private bool CreateAppDomain(string lcAppDomain) { if (lcAppDomain == null) lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x"); AppDomainSetup setup = new AppDomainSetup(); // *** Point at current directory setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory; //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin"); this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup); // Need a custom resolver so we can load assembly from non current path AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); return true; } …} Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code loads up a new type which points at an assembly that is visible both in the current domain and the remote domain, and then instantiates a type from it.
This is the code in question:Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain. Finding the Problem To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance: Assembly assembly = Assembly.GetExecutingAssembly(); string assemblyPath = Assembly.GetExecutingAssembly().Location; object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath, typeof(TypeParser).FullName).Unwrap(); // dynamic works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // casting fails parser = (TypeParser)objparser; So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.Another couple of tries reveal the problem however:// works dynamic dynParser = objparser; string info = dynParser.GetVersionInfo(); // method call works // c:\wwapps\wwhelp\wwReflection20.dll (Current Execution Folder) string info3 = typeof(TypeParser).Assembly.CodeBase; // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder) string info4 = dynParser.GetType().Assembly.CodeBase; // fails parser = (TypeParser)objparser; As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:public string GetVersionInfo() { return ".NET Version: " + Environment.Version.ToString() + "\r\n" + "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" + "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" + "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" + "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n"; } For the factory I got: .NET Version: 4.0.30319.239wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dllAssembly Cur Dir: c:\wwapps\wwhelpApplicationBase: C:\Programs\vfp9\App Domain: wwReflection534cfa1f For the instance type I got: .NET Version: 4.0.30319.239wwReflection Assembly: C:\\Programs\\vfp9\wwreflection20.dllAssembly Cur Dir: c:\\wwapps\\wwhelpApplicationBase: C:\\Programs\\vfp9\App Domain: wwDotNetBridge_56006605 which clearly shows the problem. You can see that both are loading from different appDomains but the each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads.The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. 
Here's what the viewer looks like: The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:*** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) *** The operation was successful. Bind result: hr = 0x0. The operation completed successfully. Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll Running under executable c:\programs\vfp9\vfp9.exe --- A detailed error log follows. === Pre-bind state information === LOG: User = Ras\ricks LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified) LOG: Appbase = file:///C:/Programs/vfp9/ LOG: Initial PrivatePath = NULL LOG: Dynamic Base = NULL LOG: Cache Base = NULL LOG: AppName = vfp9.exe Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config. LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind). LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL. LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll LOG: Entering run-from-source setup phase. LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll. LOG: Assembly is loaded in default load context. WRN: The same assembly was loaded into multiple contexts of an application domain: WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null WRN: This might lead to runtime failures. WRN: It is recommended to inspect your application on whether this is intentional or not. WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue. Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified. Remember your Assembly Locations As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM Interop has very specific requirements on where assemblies can be found and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. 
    The solution here was simple enough: delete the extraneous assembly, which was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that types wouldn't match. I know I've run into this a few times, and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM
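    The factory listing above wires up an AssemblyResolve handler but never shows its body. Here is a minimal sketch of what such a handler might look like, assuming the goal is to probe the folder the factory assembly was explicitly loaded from; the class and member names are illustrative, not Rick's actual implementation.
        // Hypothetical sketch of the AssemblyResolve handler referenced in the factory code:
        // when the loader cannot find an assembly, probe the folder next to the factory assembly.
        using System;
        using System.IO;
        using System.Reflection;

        public class TypeParserFactoryResolver
        {
            public static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
            {
                // args.Name is a full assembly name, e.g. "wwReflection20, Version=4.61.0.0, ..."
                string shortName = new AssemblyName(args.Name).Name;
                string candidate = Path.Combine(
                    Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
                    shortName + ".dll");

                // Hand back the copy sitting next to the factory assembly, if one exists.
                return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
            }
        }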

    Read the article

  • how do i run teamviewer in a headless system?

    - by Serge
    I'm really new to Ubuntu 12.04 and I've been trying to get remote desktop services between me and my server, and I don't have any spare monitors, keyboards, or mice. I did connect said items to the server and got TeamViewer running and connected, but I can't use those components every time I want to use remote desktop. I have SSH, but I want a GUI on hand for annoying, stressful situations. I read somewhere that Ubuntu doesn't start a GUI if there is no monitor connected, and I tried those xorg.conf setups to no avail. I believe this is the reason why my headless setup isn't working. How can I get my GUI to start up while headless?

    Read the article

  • How to change/create password keyring

    - by sadmicrowave
    When I try to remote desktop to the server from another Ubuntu machine using the remote desktop viewer, it asks me to enter the password, which I do, then the viewer pane just goes black. When I come back and look at my server, it is saying that the password keyring no longer matches the password used to log in to the machine, please re-enter the password... and when I type in the password it doesn't take it; it just keeps popping back up saying the same message over and over. I found a thread explaining to go to System--Preferences--Passwords & Encryptions and right-click on the keyring and click Set as Default. I did that and the problem persists... I tried changing the password but it told me that my original password was incorrect (even though it is the password I use to log in and provide root authentication when asked), so I deleted the keyring in hopes of adding a new one, but there is no place in the GUI to add a new one... so can I add a new one through the command line? If so, how?

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down.  The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to SQL PASS Summit.  The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge also for the 4th year in a row.  This year, I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously.  The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below).  Let's face it, Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun. One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA Shawn McGehee that I was able to do so. At first he did not believe me. It shook his world view.  It was inconceivable.  What I had done, behind the scenes, of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though. On the High Jacks side of the house, which is really merged to be SQL HACKS, I finally, after several years of struggling with how to connect to an untrusted domain like in a DMZ with a Windows account in SSMS, stumbled upon a solution that does away with the requirement to use SQL Authentication.  While "Runas" is a great command to use to run an application with a higher privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas",   "/netonly".  This option postpones the login until a connection is made and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
So a typical shortcut would look like: "C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe" You will want to make sure the passwords are synced between the two domains, your local domain and the remote domain, otherwise you may have account lockout issues, but I have found in weeks of testing this is a stable solution. Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22) please run up and high five me (wait..High Jinks.High Jacks.High Fives.Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you do actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage. And now the links to others who have all of the details. First, for the MVP Deep Dives 2, of which, like John, I was lucky enough to be able to participate in this year. http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded. http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx   Cheers! Rodney

    Read the article

  • New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available

    - by Max Arderius
    A new Rapid Install startCD (Patch 18086193) for Oracle E-Business Suite Release 12.2 is now available. We recommend that all EBS customers installing or upgrading to EBS 12.2 use this latest update. The startCD updates are distributed to customers via My Oracle Support Patch which can be uncompressed on top of any previous 12.2 startCD under the main staging area. This patch replaces any previous startCDs. What's New in This Update? This new startCD version 12.2.0.48 includes important fixes for multi-node Installs, RAC, pre-install checks, platform specific issues, and upgrade scenario failures: 18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD 18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE 18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES 18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES 18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB 18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB 18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND 18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK 18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING 18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1 18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT ) 18383075 - ADD VERBOSE OPTION TO RAC VALIDATION 18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4 18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE 18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER 18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC 18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER 18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD 18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP 18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE 18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING 18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE 18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3 18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING 18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47 18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC 18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES 18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS 18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH> 18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE 18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS 18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6 18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS 18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK 18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED 18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED 17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH 17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010 17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN 17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW 17858010 - RI POST INSTALL CHECKS 
(SSH VERIFICATION) STEP IS FAILING 17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS 17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN 17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF 17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION 17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED 17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK 17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK 17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED 17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX 17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK 17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE 17630972 - /TMP PRE-REQ INSTALLATION CHECK 17617245 - 12.2 VISION INSTALL FAILS ON AIX 17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION 17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2 17588765 - CHECKER VERSION AND PLUGIN VERSION 17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX 17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS 17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL References 12.2 Documentation Library 1581299.1 : EBS 12.2 Product Information Center 1320300.1 : Oracle E-Business Suite Release Notes, Release 12.2 1606170.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for Release 12.2.3 1624423.1 : Oracle E-Business Suite Technology Stack and Applications DBA Release Notes for R12.TXK.C.Delta.4 and R12.AD.C.Delta.4 1594274.1 : Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes Related Articles Oracle E-Business Suite 12.2 Now Available startCD options to install Oracle E-Business Suite Release 12.2

    Read the article

  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario…

    In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.)  It’s a very simple trick, really.  All I want to do is when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.”

    The first places I hooked this up were easy.  Most common cause of a postback: Buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property which tells the control what server-side code to run.  Great!  Done!  Easy!

    But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting, which causes postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged events.  So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a client-side validation function which then did the hide content/show message code (and returned a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick.

    For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight in the DropDownList and CheckBox controls’ declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”.  This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place.

    Ugh!  A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process.  And got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
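
    For anyone who wants to see that wired up, here is a minimal sketch of the Attributes.Add approach described above. The control IDs (chkShowInactive, ddlCategory) and the client-side function name (showProcessingMessage) are hypothetical stand-ins for whatever hides the GridView and shows the waiting message on your own page:

        // Minimal sketch (hypothetical control IDs and JavaScript function name).
        // Wires the client-side "please wait" script to controls that have no
        // OnClientClick property, before their AutoPostBack fires.
        protected void Page_Init(object sender, EventArgs e)
        {
            // CheckBox with AutoPostBack="true": run the script on click.
            chkShowInactive.Attributes.Add("onclick", "showProcessingMessage();");

            // A DropDownList renders as a <select> tag, so hook onchange instead.
            ddlCategory.Attributes.Add("onchange", "showProcessingMessage();");
        }

    The attributes are simply rendered onto the emitted HTML elements, so the browser runs showProcessingMessage() first and the normal postback then proceeds.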

    Read the article

  • AutoSSH for a robust tunnel

    - by Budric
    I'm trying to start an ssh tunnel from A to B and have it keep running despite things like periodic network/wifi drops on A and remote server reboots on B. My ssh tunnel is started by an upstart script on A, triggered by the event start on (net-device-up IFACE=eth0). I've found autossh, which is supposed to handle these kinds of things, but I had some trouble getting it to work. The upstart job executes: autossh -M 0 -2qTN -o "ServerAliveInterval 30" -o "ServerAliveCountMax 2" -L 5678:somehost:5678 user@B However, when I log into B and kill -9 that tunnel session, autossh just exits with "Connection to B closed by remote host." That's not what I expected autossh to do. Any advice on how to set this up? Are there any GUI service monitoring utilities out there that essentially display a green light if a service is up? Thanks.

    Read the article

  • Actor library / framework for C++

    - by Giorgio
    In the C++ project I am working on, we would like to use something like Scala actors and remote actors (see e.g. this tutorial). Being able to use remote actors (actors living in different processes, possibly on different machines, and communicating via TCP/IP) has higher priority for us because we have an application consisting of several processes deployed on different machines. Being able to use several actors living in the same process (possibly on different threads) is also interesting, but has lower priority for the moment. On Wikipedia I have found some links to actor libraries for C++ and I have started to look at Theron. Before I dive too deep into the details and build an extended example with Theron, I wanted to ask if anybody has experience with any of these libraries and which one they would recommend.

    Read the article

  • Sync csv file using nodejs

    - by Amit Dugar
    There is a remote csv file that gets updated every second or so. I need to download it (on a Windows machine) ONCE and then keep the local file in sync with the remote one. Obviously, downloading the whole file every time is not an option; I need to download only the changes (something like rsync or rdiff-backup does). I searched quite a bit but could not find out how to do this. I am sort of new to nodejs and am using this app as an opportunity to expand my nodejs skills. Also, I am planning to use nodejs and to package the app using node-webkit (https://github.com/rogerwang/node-webkit).

    Read the article
