Search Results


  • Receiving AJAX HTTP Response Code as 0

    - by shyam
    I have a pretty simple AJAX and PHP setup. When calling the PHP through AJAX, the response code comes back as 0. The PHP code runs successfully, but I can't get the response. What does this status '0' denote, and how can I solve it?

        function confirmUser(id) {
            xmlhttp = GetXmlHttpObject();
            regid = id;
            if (xmlhttp == null) {
                alert("Browser does not support HTTP Request");
                return;
            }
            var url = "confirm.php";
            url = url + "?id=" + id;
            url = url + "&a=confirm";
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    $("#txtHint" + regid).text("Awaiting confirmation");
                }
                else {
                    alert(xmlhttp.status); // this shows '0'
                }
            };
            xmlhttp.open("GET", url, true);
            xmlhttp.send(null);
        }

    This is the JavaScript I used. Pardon me if I should have added anything more than this, and tell me what I missed. I appreciate your help.
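    One thing worth noting about the snippet above: the else branch runs on every readyState change, and before the request completes the status property still reads 0. A minimal sketch of a version that only inspects the status once the request has finished (the error-handling text is illustrative):

        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState != 4) {
                return; // not finished yet; status would still read 0 here
            }
            if (xmlhttp.status == 200) {
                $("#txtHint" + regid).text("Awaiting confirmation");
            }
            else {
                // a genuine failure: aborted request, network error, or a
                // cross-origin call whose status is reported as 0
                alert("Request failed with status " + xmlhttp.status);
            }
        };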


  • How to get email id from Google API response

    - by user1726508
    I am able to get user information from the Google API response using OAuth2, but I don't know how to read those fields individually. The response I am getting from the Google API:

        Access token: ya29.AHES6ZQ3QxKxnfAzpZasdfd23423NuxJs29gMa39MXV551yMmyM5IgA
        {
            "id": "112361893525676437860",
            "name": "Ansuman Singh",
            "given_name": "Ansuman",
            "family_name": "Singh",
            "link": "https://plus.google.com/112361893525676437860",
            "gender": "male",
            "birthday": "0000-03-18",
            "locale": "en"
        }
        Original Token: ya29.AHES6ZQ3QxKxnfAzpZu0lYHYu8sdfsdafdgMa39MXV551yMmyM5IgA
        New Token: ya29.AHES6ZQ3QxKxnfdsfsdaYHYu8TNuxJs29gMa39MXV551yMmyM5IgA

    But I want only "id" and "name" individually, to save in my database table. How can I do this? I got the above response/output by using the code below.

        public static void main(String[] args) throws IOException {
            -------------------------
            -------------------------
            -------------------------
            String accessToken = authResponse.accessToken;
            GoogleAccessProtectedResource access = new GoogleAccessProtectedResource(accessToken,
                    TRANSPORT, JSON_FACTORY, CLIENT_ID, CLIENT_SECRET, authResponse.refreshToken);
            HttpRequestFactory rf = TRANSPORT.createRequestFactory(access);
            System.out.println("Access token: " + authResponse.accessToken);
            String url = "https://www.googleapis.com/oauth2/v1/userinfo?alt=json&access_token="
                    + authResponse.accessToken;
            final StringBuffer r = new StringBuffer();
            final URL u = new URL(url);
            final URLConnection uc = u.openConnection();
            final int end = 1000;
            InputStreamReader isr = null;
            BufferedReader br = null;
            isr = new InputStreamReader(uc.getInputStream());
            br = new BufferedReader(isr);
            final int chk = 0;
            while ((url = br.readLine()) != null) {
                if ((chk >= 0) && ((chk < end))) {
                    r.append(url).append('\n');
                }
            }
            System.out.print("");
            System.out.println();
            System.out.print(" " + r); // this prints everything at once, but I want the fields individually
            access.refreshToken();
            System.out.println("Original Token: " + accessToken + " New Token: " + access.getAccessToken());
        }
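    Since the userinfo endpoint returns plain JSON, one way to pull out individual fields is to parse the buffered response with a JSON library. A minimal sketch using org.json (the library choice is an assumption; any JSON parser would do):

        import org.json.JSONObject;

        // r is the StringBuffer holding the raw JSON response from the code above
        JSONObject userInfo = new JSONObject(r.toString());
        String id = userInfo.getString("id");     // "112361893525676437860"
        String name = userInfo.getString("name"); // "Ansuman Singh"
        // id and name can now be bound to an INSERT statement for the database table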


  • How do I automatically update hundreds of images in an HTML page using jquery?

    - by Chris
    I have an HTML page where I want to refresh a lot of images every 30 seconds after the page has loaded. I understand how to do this with jQuery and a single image, but I have over 200 images, and for each one a custom URL must be called to determine the current image to display. I need an efficient way to have jQuery call the custom URL associated with each image, download the URL of the needed image as it changes, and then update that image in the page. A current hyperlink example to demonstrate the custom URLs:

        <a href="/urlThatReturnsCurrentImageURL/1234/4567">link to url for image</a>

    Each custom URL will return an image tag like this (or any other text that makes this simpler for jQuery):

        <img src="/static/someImage.jpg">

    What is the simplest way to have jQuery call the custom URL for each image every 30 seconds and download the image URL, image HTML, or some other text it can use to fetch the right image? Please keep in mind that I will have about 200 of these on a page.
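    A minimal sketch of one approach, assuming each img element carries a data-status-url attribute pointing at its custom URL (that attribute name and the markup are assumptions, not part of the original page):

        // Poll every image's custom URL and swap the src only when it changes.
        function refreshImages() {
            $("img[data-status-url]").each(function () {
                var $img = $(this);
                $.get($img.attr("data-status-url"), function (html) {
                    // The endpoint returns an <img> tag; read its src attribute.
                    var newSrc = $(html).attr("src");
                    if (newSrc && newSrc !== $img.attr("src")) {
                        $img.attr("src", newSrc);
                    }
                });
            });
        }

        $(function () {
            setInterval(refreshImages, 30000); // every 30 seconds
        });

    Note that with 200 images this still issues 200 requests per cycle; a single endpoint returning all current URLs in one JSON response would be the more efficient design if the server can be changed.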


  • Query descendants in XML to LINQ

    - by Gordon
    I have the following XML data:

        <portfolio>
          <item>
            <title>Site</title>
            <description>Site.com is a </description>
            <url>http://www.site.com</url>
            <photos>
              <photo url="http://www.site.com/site/thumbnail.png" thumbnail="true" description="Main" />
              <photo url="http://www.site.com/site/1.png" thumbnail="false" description="Main" />
            </photos>
          </item>
        </portfolio>

    In C# I am using the following LINQ query:

        List<PortfolioItem> list = new List<PortfolioItem>();
        XDocument xmlDoc = XDocument.Load(HttpContext.Current.Server.MapPath("~/app_data/portfolio.xml"));
        list = (from portfolio in xmlDoc.Descendants("item")
                select new PortfolioItem()
                {
                    Title = portfolio.Element("title").Value,
                    Description = portfolio.Element("description").Value,
                    Url = portfolio.Element("url").Value
                }).ToList();

    How do I go about querying the photos node? In the PortfolioItem class I have a property:

        List<Photo> Photos { get; set; }

    Any ideas would be greatly appreciated!
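    One way to populate that property is a nested query over the photo elements. A sketch, assuming the list's element type is a Photo class with Url, Thumbnail and Description properties (those names are assumptions):

        list = (from portfolio in xmlDoc.Descendants("item")
                select new PortfolioItem()
                {
                    Title = portfolio.Element("title").Value,
                    Description = portfolio.Element("description").Value,
                    Url = portfolio.Element("url").Value,
                    // nested query: one Photo per <photo> element under <photos>
                    Photos = (from photo in portfolio.Element("photos").Elements("photo")
                              select new Photo()
                              {
                                  Url = (string)photo.Attribute("url"),
                                  Thumbnail = (bool)photo.Attribute("thumbnail"),
                                  Description = (string)photo.Attribute("description")
                              }).ToList()
                }).ToList();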


  • Why is the word PERSONAL still relevant in the term PC? [closed]

    - by Bill
    I have spent half an hour trying to change an icon on my Win-7-64 machine (Why Can't I Change the Icon). One reasonable suggestion (reasonable in terms of having a solution, not reasonable in terms of having to jump through these hoops for such a basic requirement) was to delete the old icon from the %userprofile%\Local Settings\... folder. However, when I click on this folder in Windows Explorer I am told the folder is not accessible - Access Denied. Well! It's my PERSONAL computer, isn't it? Isn't that what PC stands for? It's MY computer - why can't I get access to that folder? It's about time we started calling these machines MCs (Microsoft Computers) or WCs (Windows Computers), because they sure as hell ain't PERSONAL damn computers!!!!


  • jQuery plugin creation

    - by user1542535
    The output I am expecting is an unordered list which I am creating with jQuery, which takes input from a JSON file (this works fine when I don't create it as a plugin). I am very new to the concept of building a plugin. I've tried to create one, but it doesn't output my unordered list.

    JSON file structure:

        {
            "Categories": [
                {
                    "cat_id": "1",
                    "name": "Main Menu1",
                    "sub_categories": [
                        {
                            "cat_id": "10",
                            "name": " Sub Menu11",
                            "sub_level_one_link": "http://one.com"
                        },

    My JS file:

        // create plugin
        jQuery.fn.emrMenu = function (options) {
            myoptions = jQuery.extend({
                url: "error"
            }, options);
            if (myoptions.url == "error") {
                alert("Error:No data recieved");
                return false;
            }
            $(this).html(myoptions.url);
            return this.each(function () {
                //alert(myoptions.url + this.id);
                $.getJSON(myoptions.url, function (data) {
                    $.each(data.Categories, function (i, category) {
                        alert("test1");
                        // get all sub menu items in list indexes
                        var submenudata = '';
                        $.each(category.sub_categories, function (i, sub_categories) {
                            submenudata += "<li><a href='" + sub_categories.sub_level_one_link + "' <span>" + sub_categories.name + "</span></a></li>";
                        });
                        var menudata = "<li id='" + category.cat_id + "' class='has-sub '><a href='#'><span>" + category.name + "</span></a><ul>" + submenudata + "</ul></li>";
                        // stringify unordered list and bind to div
                        var menu = "<ul>" + menudata + "</ul>";
                        // $(menu).appendTo("#"this.id);
                    });
                });
                //alert (this.id);
            });
        }

    And I am calling the plugin:

        <script>
        $(document).ready(function() {
            $('#menu_n').emrMenu({ url: "menu_data.json" });
        });
        </script>

    I'm pretty confused at this point. Any help is greatly appreciated, cheers!
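    A sketch of one likely fix for the snippet above: the list is never appended (the appendTo line is commented out and would be a syntax error anyway), `this` inside the getJSON callback no longer refers to the element, and the anchor markup is malformed. Capturing the element first and appending once after the loop addresses all three (assuming the same JSON structure):

        jQuery.fn.emrMenu = function (options) {
            var myoptions = jQuery.extend({ url: "error" }, options);
            if (myoptions.url === "error") {
                alert("Error: No data received");
                return false;
            }
            return this.each(function () {
                var $container = $(this); // capture now; `this` changes inside the callbacks
                $.getJSON(myoptions.url, function (data) {
                    var menudata = '';
                    $.each(data.Categories, function (i, category) {
                        var submenudata = '';
                        $.each(category.sub_categories, function (j, sub) {
                            submenudata += "<li><a href='" + sub.sub_level_one_link + "'><span>" + sub.name + "</span></a></li>";
                        });
                        menudata += "<li id='" + category.cat_id + "' class='has-sub'><a href='#'><span>" + category.name + "</span></a><ul>" + submenudata + "</ul></li>";
                    });
                    $("<ul>" + menudata + "</ul>").appendTo($container); // append once, after the loop
                });
            });
        };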


  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the whole JavaScript world. If you know, or have heard about, the new feature in C# 5.0 called "async and await", or if you have learnt F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind" we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by "Wind" automatically and transparently.

    What's the Problem: Dense "Callback" Phobia

    Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around the async callback pattern, which means that when the operation is done, it will invoke a function that was passed as the last parameter. For example, the database connection opening code would look like this.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
            }
        });

    And then if we need to query the database, the code would be like this, nested inside the previous function.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if(error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                    }
                });
            }
        });

    Assuming we need to copy some data from this database to another, we then need to open another connection and execute the command within the function nested under the query function.

        sql.open(connectionString, function(error, conn) {
            if(error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if(error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                        target.open(targetConnectionString, function(error, t_conn) {
                            if(error) {
                                // connect failed
                            }
                            else {
                                t_conn.queryRaw(copy_command, function(error, results) {
                                    if(error) {
                                        // copy failed
                                    }
                                    else {
                                        // and then, what do you want to do now...
                                    }
                                });
                            }
                        });
                    }
                });
            }
        });

    This is just an example. In a real project the logic would be more complicated, which means our application can get messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is how to make such code straightforward and easy to read, something like the code below.
        try
        {
            // open source connection
            var s_conn = sqlConnect(s_connectionString);
            // retrieve data
            var results = sqlExecuteCommand(s_conn, s_command);

            // open target connection
            var t_conn = sqlConnect(t_connectionString);
            // prepare the copy command
            var t_command = getCopyCommand(results);
            // execute the copy command
            sqlExecuteCommand(s_conn, t_command);
        }
        catch (ex)
        {
            // error handling
        }

    What's the Problem: Sync-styled Async Programming

    Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation and mixes it with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js uses non-blocking IO, we cannot simply invoke those operations one by one; they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and the print operation execute synchronously I introduced a module named "async", and the code changed as below.

        async.forEach(results.rows,
            function (row, callback) {
                var resource = {
                    "PartitionKey": row[1],
                    "RowKey": row[0],
                    "Value": row[2]
                };
                client.insertEntity(tableName, resource, function (error) {
                    if (error) {
                        callback(error);
                    }
                    else {
                        console.log("entity inserted.");
                        callback(null);
                    }
                });
            },
            function (error) {
                if (error) {
                    error["target"] = "insertEntity";
                    res.send(500, error);
                }
                else {
                    console.log("all done.");
                    res.send(200, "Done!");
                }
            });

    This ensured that the "Finished" message would be printed after all table entities had been inserted, but it cannot promise that the records will be inserted in sequence. So it's another challenge: how to make the code look sync-styled, like this?

        try
        {
            forEach(row in rows) {
                var entity = { /* ... */ };
                tableClient.insert(tableName, entity);
            }

            console.log("Finished");
        }
        catch (ex) {
            console.log(ex);
        }

    How "Wind" Helps

    "Wind" is a JavaScript library which provides control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available on NPM, so we can install it with "npm install wind". Now let's create a very simple Node.js application as an example. This application will take some website URLs from the command arguments, try to retrieve the body length of each page, print the lengths to the console, and print "Finished" at the end. I'm going to use the "request" module to make the HTTP calls simple, so I also need to install it with "npm install request". The code would be like this.
        var request = require("request");

        // get the urls from arguments; the first two arguments are `node.exe` and `fetch.js`
        var args = process.argv.splice(2);

        // main function
        var main = function() {
            for(var i = 0; i < args.length; i++) {
                // get the url
                var url = args[i];
                // send the http request and try to get the response and body
                request(url, function(error, response, body) {
                    if(!error && response.statusCode == 200) {
                        // log the url and the body length
                        console.log(
                            "%s: %d.",
                            response.request.uri.href,
                            body.length);
                    }
                    else {
                        // log error
                        console.log(error);
                    }
                });
            }

            // finished
            console.log("Finished");
        };

        // execute the main function
        main();

    Let's execute this application. (I split the command across multiple lines for better reading.)

        node fetch.js
            "http://www.igt.com/us-en.aspx"
            "http://www.igt.com/us-en/games.aspx"
            "http://www.igt.com/us-en/cabinets.aspx"
            "http://www.igt.com/us-en/systems.aspx"
            "http://www.igt.com/us-en/interactive.aspx"
            "http://www.igt.com/us-en/social-gaming.aspx"
            "http://www.igt.com/support.aspx"

    Below is the output. As you can see, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because in this code the request command and the console logging command are executed asynchronously and concurrently. Now let's introduce "Wind" to make them execute in order, which means it will request the websites one by one and print the message at the end.

    First of all we need to import the "Wind" package, making sure there's only one global variable named "Wind", and that it's "Wind" instead of "wind".

        var Wind = require("wind");

    Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if(error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data =
                        {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

    The code above creates a "Task" from the original request call. In "Wind" a "Task" is an operation that will finish at some point in the future. A "Task" can be started by invoking its start() method, but no one knows when it will actually finish. The Wind.Async.Task.create helper creates a task for us. Its only parameter is a function where we put the actual operation and then notify the task object that it finished successfully or failed, using the complete() method. In the code above I invoked the request method; if it retrieved the response successfully I set the status of the task to "success" with the URL and body length, and if it failed I set the task to "failure" and passed the error out.

    Next, we will change the main() function. In "Wind", if we want a function to be controllable by Wind we need to mark it as "async". This is done with the code below.
        var main = eval(Wind.compile("async", function() {
        }));

    When the application is running, Wind will detect "eval(Wind.compile("async", function" and generate anonymous code from the body of the original function; the application will then run the anonymous code instead of the original one. In our example the main function becomes this.

        var main = eval(Wind.compile("async", function() {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

    As you can see, when I request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and the flow should be paused until it has finished (or failed). So in this case my application will pause until the first response is received, then print its body length, then try the next one, and at the end print the finish message.

    Finally, execute the main function. The full code is below.

        var request = require("request");
        var Wind = require("wind");

        var args = process.argv.splice(2);

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if(error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data =
                        {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

        var main = eval(Wind.compile("async", function() {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

        main().start();

    Run our new application. At the beginning we see the code Wind compiled and generated; then we can see the pages being requested one by one, and at the end the finish message. Below is the code Wind generated for us, with the original code shown alongside in comments.
        // Original:
        function () {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }

        // Compiled:
        /* async << function () { */ (function () {
            var _builder_$0 = Wind.builders["async"];
            return _builder_$0.Start(this,
                _builder_$0.Combine(
                    _builder_$0.Delay(function () {
                        /* var i = 0; */ var i = 0;
                        /* for ( */ return _builder_$0.For(function () {
                            /* ; i < args.length */ return i < args.length;
                        }, function () {
                            /* ; i ++) { */ i ++;
                        },
                        /* try { */ _builder_$0.Try(
                            _builder_$0.Delay(function () {
                                /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                                    /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                                    return _builder_$0.Normal();
                                });
                            }),
                            /* } catch (ex) { */ function (ex) {
                                /* console.log(ex); */ console.log(ex);
                                return _builder_$0.Normal();
                            /* } */ },
                            null
                        )
                        /* } */ );
                    }),
                    _builder_$0.Delay(function () {
                        /* console.log("Finished"); */ console.log("Finished");
                        return _builder_$0.Normal();
                    })
                )
            );
        /* } */ })

    How Wind Works

    Some people may raise a big concern when they see that I used "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, and that "eval" performs poorly. But I would say: Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function", so it knows this function should be compiled. It then uses parse-js to analyze the inner JavaScript, generates the anonymous code in memory, and rewrites the original code so that when the application runs it uses the anonymous version instead. Since the code generation is done once when the application starts, no matter how long our application runs and how many times the async function is invoked, the generated code is reused without being regenerated. So there's no significant performance hit when using Wind.

    Wind in My Previous Demo

    Let's adopt Wind in one of my previous demonstrations to see how it helps us make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied records from my WASD database to table storage, the logic was:

    1. Open the database connection.
    2. Execute a query to select all records from the table.
    3. Recreate the table in Windows Azure Table Storage.
    4. Create an entity from each of the records retrieved previously, and insert them into table storage.
    5. Finally, show a message as the HTTP response.

    But as the image below shows, with so many callbacks and async operations it's very hard to understand the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package, and we need to include the package files in the project and mark them as "Copy always". Then add the Wind package to the source code. Pay attention to the variable name: you must use "Wind" instead of "wind".
        var express = require("express");
        var async = require("async");
        var sql = require("node-sqlserver");
        var azure = require("azure");
        var Wind = require("wind");

    Now we need to create some async functions with Wind. All async operations should be wrapped so that they can be controlled by Wind: open database, retrieve records, recreate table (delete and create) and insert entity. Below are these new functions; all of them are created with Wind.Async.Task.create.

        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };

        sql.queryAsync = function (conn, query) {
            return Wind.Async.Task.create(function (t) {
                conn.queryRaw(query, function (error, results) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", results);
                    }
                });
            });
        };

        azure.recreateTableAsync = function (tableName) {
            return Wind.Async.Task.create(function (t) {
                client.deleteTable(tableName, function (error, successful, response) {
                    console.log("delete table finished");
                    client.createTableIfNotExists(tableName, function (error, successful, response) {
                        console.log("create table finished");
                        if (error) {
                            t.complete("failure", error);
                        }
                        else {
                            t.complete("success", null);
                        }
                    });
                });
            });
        };

        azure.insertEntityAsync = function (tableName, entity) {
            return Wind.Async.Task.create(function (t) {
                client.insertEntity(tableName, entity, function (error, entity, response) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", null);
                    }
                });
            });
        };

    Then, in order to use these functions, we create a new function which contains all the steps for copying the data.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Let's execute the steps one by one with the "$await" keyword introduced by Wind, so that they are invoked in sequence. First, open the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Then retrieve all records over the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    After recreating the table, we create the entities and insert them into table storage.
        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Finally, send the response back to the browser.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Compared with the previous code, this is far more readable and much easier to understand; it's easy to see what the function does even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above.

        app.get("/was/copyRecords", function (req, res) {
            copyRecords(req, res).start();
        });

    And below are the logs printed in the local compute emulator console. As we can see, the functions executed one by one and the response finally came back to my browser.

    Scaffold Functions in Wind

    Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build our async code more easily. I'm going to introduce some basic scaffold functions here. In the code above I created some functions wrapping original async functions such as open database, create table, etc. All of them are very similar: create a task with Wind.Async.Task.create, and return the error or result object through the task's complete() function. In fact, Wind provides helpers to create task objects from the original async functions directly. If the original async function has only a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object. For example, the code below returns a task object wrapping the file existence check function.
        var Wind = require("wind");
        var fs = require("fs");

        fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

    In Node.js a very popular async function pattern is that the first parameter of the callback is the error object and the remaining parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created with the code below.

        sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

        /*
        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };
        */

    When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, could not be processed correctly, so I personally suggest writing the wrapper methods manually.

    Another scaffold area in Wind is parallel task coordination. In this example, the open database, retrieve records and recreate table steps should be invoked one by one, but the copying from database to table storage can be executed in parallel. Wind has a scaffold function named Task.whenAll that can be used here. Task.whenAll accepts a list of tasks and creates a new task which returns only when all tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time.

        var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage in parallel
                    var tasks = new Array(results.rows.length);
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        tasks[i] = azure.insertEntityAsync(tableName, entity);
                    }
                    $await(Wind.Async.Task.whenAll(tasks));
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

        app.get("/was/copyRecordsInParallel", function (req, res) {
            copyRecordsInParallel(req, res).start();
        });

    Besides task creation and coordination, Wind supports cancellation, so that we can send a cancellation signal to tasks, and it includes an exception solution, which means any exceptions will be reported to the calling function.

    Summary

    In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, different from other async libraries and frameworks, and adopting ideas from F# and C#, Wind uses runtime code generation to make it easier to write async, callback-based functions in a sync-styled way.
    By using Wind there are almost no callbacks, and the code becomes very easy to understand. Wind is still being developed and improved; there might be some problems, but the author, Jeff, would be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by:

    - Email: [email protected]
    - Group: https://groups.google.com/d/forum/windjs
    - GitHub: https://github.com/JeffreyZhao/wind/issues

    Source code can be downloaded here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention-over-configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and presentation logic. However, the MVC paradigm was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View's model, the code could look like the following:

    Fig 1: To show a div or not. Server side .NET code is used in the View.

    Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice indicates that the controller should not be aware of the div; the ShowDiv variable in the model should not exist. A controller should ideally only be used to do the plumbing of getting the data populated in the model, and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how AngularJS, a JavaScript framework by Google, can be used effectively to build web applications where:

    1. Views are pure HTML
    2. Controllers (in the server sense) are pure REST based API calls
    3. The presentation layer is loaded as needed from partial HTML-only files

    What is MVVM?

    MVVM, short for Model View View Model, is a new paradigm in web development. In this paradigm the Model and View exist on the client side through JavaScript, instead of being processed on the server through postbacks. These JavaScript frameworks facilitate a clear separation of the "frontend" (the data rendering logic) from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) gets reflected in the View immediately (Model → View), and a change in the View (through manual input) gets reflected in the Model immediately (View → Model). The following figure shows this conceptually (comments are shown in red):

    Fig 2: Demonstration of MVVM in action

    In Fig 2, two text boxes are bound to the same variable model.myInt. Changing the view manually (typing into one text box through keyboard input) also changes the other textbox in real time, demonstrating the View → Model property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, changing the model through JavaScript; this immediately updates the view (the value in the two textboxes), demonstrating the Model → View property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.
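    Since Fig 2 was originally an image, here is a sketch of what such a binding demo could look like in Angular markup (the exact markup is an assumption, reconstructed from the description above):

        <!-- Two inputs bound to the same model variable, plus a button
             that changes the model from JavaScript -->
        <div ng-app ng-init="model = { myInt: 6 }">
            <input type="text" ng-model="model.myInt" />
            <input type="text" ng-model="model.myInt" />
            <!-- the * 1 coerces the bound value back to a number before adding -->
            <button ng-click="model.myInt = model.myInt * 1 + 1">Add 1</button>
        </div>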
    Application, Routes, Views, Controllers, Scope and Models

    Angular can be used in many ways to construct web applications. For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world; whenever they are used in this article, they refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA:

    Fig 3: The skeleton of a SPA

    The skeleton of the application is based on the Bootstrap starter template, which can be found at http://getbootstrap.com/examples/starter-template/. Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:

        /app/js/controllers.js
        /app/js/app.js

    These scripts define the routes, views and controllers, which we shall come to in a moment.

    Application

    Notice that the body tag (Fig 3) has an extra attribute:

        ng-app="smsApp"

    Providing this tag "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This module is defined in /app/js/app.js:

        angular.module('smsApp', ['smsApp.controllers', function () {}])

    Fig 4: The definition of our application module

    The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.

    Routing and Views

    Notice that in the navbar (Fig 3) we have included two hyperlinks to:

        "#/app"
        "#/help"

    This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework.

        angular.module('smsApp', ['smsApp.controllers', function () { }])
            //Configure the routes
            .config(['$routeProvider', function ($routeProvider) {
                $routeProvider.when('/binding', {
                    templateUrl: '/app/partials/bindingexample.html',
                    controller: 'BindingController'
                });
            }]);

    Fig 5: The definition of a route with an associated partial view and controller

    As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di

    What the above code snippet is doing is telling Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html, and, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
    You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire; you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

    Controllers and Models

    We have seen how we define which views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as defined in the route shown earlier in Fig 5). Remember that we declared that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" is defined in the module smsApp.controllers:

        angular.module('smsApp.controllers', [function () { }])
            .controller('BindingController', ['$scope', function ($scope) {
                $scope.model = {};
                $scope.model.myInt = 6;
                $scope.addOne = function () {
                    $scope.model.myInt++;
                }
            }]);

    Fig 6: The definition of a controller in the "smsApp.controllers" module.

    The pieces are falling into place! Remember Fig 2? That was the code of a partial view loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

    Scope

    We can see from Fig 6 that in the definition of "BindingController" we declared a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined, similar in meaning to scope in a programming language like C#.

    Model: The Scope is not the Model

    It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason this is a bad idea is that scope is hierarchical in Angular. If we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller. This inheritance follows JavaScript prototypal inheritance. Say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through prototypal inheritance, which basically means the child scope has a myInt variable that points to the parent scope's myInt variable. If we now assigned the value of myInt in the parent, the child scope would see the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
    However, if we were to assign the value of the myInt variable in the child scope, the link of that variable to the parent scope would be broken: the myInt variable in the child scope would now point to the new value and no longer to the parent scope's myInt variable. But if we defined a variable model in the parent scope, the child scope would also have a model variable pointing to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope changes it in the child scope too, since the variable points to the model variable in the parent scope. And changing the value of $scope.model.myInt in the child scope ALSO changes the value in the parent scope: the model reference in the child scope points to the model variable in the parent, and since we made no new assignment to the model variable itself in the child scope but only changed one of its attributes, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth". This is a tricky concept, and it is considered good practice NOT to rely on scope inheritance. More information on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes
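    A compact sketch of the pitfall just described, with hypothetical controller names (not part of the original article):

        // Writing to a primitive in the child scope shadows the parent's value;
        // writing through an object property updates the shared object instead.
        angular.module('scopeDemo', [])
            .controller('ParentCtrl', ['$scope', function ($scope) {
                $scope.myInt = 6;            // primitive: a child write will break the link
                $scope.model = { myInt: 6 }; // object: child writes follow the reference
            }])
            .controller('ChildCtrl', ['$scope', function ($scope) {
                $scope.myInt = 7;            // child now has its own myInt; parent still sees 6
                $scope.model.myInt = 7;      // parent and child both see 7
            }]);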
    Building It: An AngularJS application using a .NET Web API Backend

    Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful: an application that can be used to send SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

    Fig 7: Broad application architecture

    We are going to add an HTML partial to our project. This partial will contain the form fields that accept the phone number and the message to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that runs on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET Web API through a REST based API to get the history of SMS messages, add a message, and so on. For simplicity, we will use an in-memory data structure for the purposes of this application. Thus, the tasks ahead of us are:

    1. Creating the REST Web API with GET, PUT, POST, DELETE methods
    2. Creating the SmsView.html partial
    3. Creating the SmsController controller with methods that are called from the SmsView.html partial
    4. Adding a new route that loads the controller and the partial

    1. Creating the REST WebAPI

    This is a simple task that should be quite straightforward for any .NET developer. The following listing shows our ApiController:

        public class SmsMessage
        {
            public string to { get; set; }
            public string message { get; set; }
        }

        public class SmsResource : SmsMessage
        {
            public int smsId { get; set; }
        }

        public class SmsResourceController : ApiController
        {
            public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
            public static int currentId = 0;

            // GET api/<controller>
            public List<SmsResource> Get()
            {
                List<SmsResource> result = new List<SmsResource>();
                foreach (int key in messages.Keys)
                {
                    result.Add(messages[key]);
                }
                return result;
            }

            // GET api/<controller>/5
            public SmsResource Get(int id)
            {
                if (messages.ContainsKey(id))
                    return messages[id];
                return null;
            }

            // POST api/<controller>
            public List<SmsResource> Post([FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    // copy into a new SmsResource (a direct cast from SmsMessage would fail at runtime)
                    SmsResource res = new SmsResource { to = value.to, message = value.message };
                    res.smsId = currentId++;
                    messages.Add(res.smsId, res);
                    //SentlyPlusSmsSender.SendMessage(value.to, value.message);
                    return Get();
                }
            }

            // PUT api/<controller>/5
            public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
            {
                // Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    if (messages.ContainsKey(id))
                    {
                        // Update the message
                        messages[id].message = value.message;
                        messages[id].to = value.to;
                    }
                    return Get();
                }
            }

            // DELETE api/<controller>/5
            public List<SmsResource> Delete(int id)
            {
                if (messages.ContainsKey(id))
                {
                    messages.Remove(id);
                }
                return Get();
            }
        }

    Once this class is defined, we should be able to access the Web API with a simple GET request from the browser:

        http://localhost:port/api/SmsResource

    Notice the commented line //SentlyPlusSmsSender.SendMessage. The SentlyPlusSmsSender class is defined in the attached solution; we have shown this line commented out because we want to explain the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent! By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                var appXmlType = config.Formatters.XmlFormatter.
                    SupportedMediaTypes.
                    FirstOrDefault(
                        t => t.MediaType == "application/xml");
                config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
            }
        }

    We now have our backend REST API, which we can consume from Angular!

    2. Creating the SmsView.html partial

    This simple partial defines two fields: the destination phone number (international format, starting with a +) and the message. These fields are bound to model.phoneNumber and model.message. We also add a button that we hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) is displayed below the form input.
    The following code shows the code for the partial:

        <!-- If model.errorMessage is defined, then render the error div -->
        <div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
             ng-show="model.errorMessage != undefined">
            <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
            <strong>Error!</strong>
            <br />
            {{ model.errorMessage }}
        </div>
        <!-- The input fields bound to the model -->
        <div class="well" style="margin-top: 30px;">
            <table style="width: 100%;">
                <tr>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Phone number (eg; +44 7778 609466)"
                               ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                               onkeypress="return checkPhoneInput();" />
                    </td>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Message" ng-model="model.message"
                               class="form-control" style="width: 90%" />
                    </td>
                    <td style="text-align: center;">
                        <button class="btn btn-danger" ng-click="sendMessage();"
                                ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                        <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
                    </td>
                </tr>
            </table>
        </div>
        <!-- The past messages -->
        <div style="margin-top: 30px;">
            <!-- The following div is shown if there are no past messages -->
            <div ng-show="model.allMessages.length == 0">
                No messages have been sent yet!
            </div>
            <!-- The following div is shown if there are some past messages -->
            <div ng-show="model.allMessages.length != 0">
                <table style="width: 100%;" class="table table-striped">
                    <tr>
                        <td>Phone Number</td>
                        <td>Message</td>
                        <td></td>
                    </tr>
                    <!-- The ng-repeat directive is like the repeater control in .NET,
                         but as you can see this partial is pure HTML, which is much cleaner -->
                    <tr ng-repeat="message in model.allMessages">
                        <td>{{ message.to }}</td>
                        <td>{{ message.message }}</td>
                        <td>
                            <button class="btn btn-danger" ng-click="delete(message.smsId);"
                                    ng-disabled="model.isAjaxInProgress">Delete</button>
                        </td>
                    </tr>
                </table>
            </div>
        </div>

    The above code is commented and should be self explanatory. Conditional rendering is achieved using the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true, and based on this variable buttons are disabled through the ng-disabled directive, added as an attribute to the buttons. The ng-repeat directive, added as an attribute to the tr tag, causes the table row to be rendered multiple times, much like an ASP.NET repeater.

    3. Creating the SmsController controller

    The penultimate piece of our application is the controller, which responds to events from our view and interacts with our MVC4 REST Web API. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services
        .controller('SmsController', ['$scope', '$http', function ($scope, $http) {
            // We define the model
            $scope.model = {};
            // We define the allMessages array in the model
            // that will contain all the messages sent so far
            $scope.model.allMessages = [];
            // The error, if any
            $scope.model.errorMessage = undefined;
            // We initially load data, so set isAjaxInProgress = true
            $scope.model.isAjaxInProgress = true;

            // Load all the messages
            $http({ url: '/api/smsresource', method: "GET" }).
                success(function (data, status, headers, config) {
                    // this callback will be called asynchronously
                    // when the response is available
                    $scope.model.allMessages = data;
                    // We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                }).
                error(function (data, status, headers, config) {
                    // called asynchronously if an error occurs
                    // or server returns response with an error status
                    $scope.model.errorMessage = "Error occurred status:" + status;
                    // We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                });

            $scope.delete = function (id) {
                // We are making an ajax call so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
                    success(function (data, status, headers, config) {
                        // this callback will be called asynchronously
                        // when the response is available
                        $scope.model.allMessages = data;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        // called asynchronously if an error occurs
                        // or server returns response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }

            $scope.sendMessage = function () {
                $scope.model.errorMessage = undefined;
                var message = '';
                if ($scope.model.message != undefined)
                    message = $scope.model.message.trim();
                if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
                        || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
                    $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
                    return;
                }
                if (message.length == 0) {
                    $scope.model.errorMessage = "You must specify a message!";
                    return;
                }
                // We are making an ajax call so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({
                    url: '/api/smsresource',
                    method: "POST",
                    data: { to: $scope.model.phoneNumber, message: $scope.model.message }
                }).
                    success(function (data, status, headers, config) {
                        // this callback will be called asynchronously
                        // when the response is available
                        $scope.model.allMessages = data;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        // called asynchronously if an error occurs
                        // or server returns response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        // We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }
        }]);

    We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST Web API. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

    4. Add a new route that loads the controller and the partial

    This is the easiest part of the puzzle.
We simply define another route in the /app/js/app.js file:

$routeProvider.when('/sms', {
    templateUrl: '/app/partials/smsview.html',
    controller: 'SmsController'
});

Conclusion In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, thus delivering a snappy and fast user interface. We have seen how we can build client-side HTML-only views that avoid the messy syntax offered by server-side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent. Even though we used ASP.NET to create our REST API, we could just as easily have used any other language such as Node.js, Ruby etc. without changing a single line of our front-end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started. To get started with the code for this project: 1. Sign up for an account at http://plus.sent.ly (free) 2. Add your phone number 3. Go to the "My Identities" page 4. Note down your Sender ID, Consumer Key and Consumer Secret 5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing 6. Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file 7. Run the project through Visual Studio!

    Read the article

  • No Preview Images in File Open Dialogs on Windows 7

    - by Rick Strahl
    I’ve been updating some file uploader code in my photoalbum today and while I was working with the uploader I noticed that the File Open dialog using Silverlight that handles the file selections didn’t allow me to ever see an image preview for image files. It sure would be nice if I could preview the images I’m about to upload before selecting them from a list. Here’s what my list looked like: This is the Medium Icon view, but regardless of the views available including Content view only icons are showing up. Silverlight uses the standard Windows File Open Dialog so it uses all the same settings that apply to Explorer when displaying content. It turns out that the Customization options in particular are the problem here. Specifically the Always show icons, never thumbnails option: I had this option checked initially, because it’s one of the defenses against runaway random Explorer views that never stay set at my preferences. Alas, while this setting affects Explorer views apparently it also affects all dialog based views in the same way. Unchecking the option above brings back full thumbnailing for all content and icon views. Here’s the same Medium Icon view after turning the option off: which obviously works a whole lot better for selection of images. The bummer of this is that it’s not controllable at the dialog level – at least not in Silverlight. Dialogs obviously have different requirements than what you see in Explorer so the global configuration is a bit extreme especially when there are no overrides on the dialog interface. Certainly for Silverlight the ability to have previews is a key feature for many applications since it will be dealing with lots of media content most likely. Hope this helps somebody out. Thanks to Tim Heuer who helped me track this down on Twitter.© Rick Strahl, West Wind Technologies, 2005-2010Posted in Silverlight  Windows  

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
Introduction jQuery enjoys living inside pages which are built on top of the ASP.NET MVC Framework. ASP.NET MVC is a place where things are organized very well, and it is quite hard to make them dirty, especially because the pattern pushes you toward purity (you can still make it dirty if you want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table which only shows data and does not include sorting, pagination or the other advanced features that we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to build something from a pure table in our own clean style? In one of my recent projects, I've been using the jQuery tablesorter plugin and the tablesorter.pager plugin that goes along with it. You don't need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to your pure HTML table and a pagination div, and give your table advanced sorting and pagination features.   Demo Project Resources The resources I'm using for this demo project are shown in the following solution explorer window print screen: Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later. Content/Site.css – The main css theme, where we will add the theme for our table too Controllers/HomeController.cs – The controller I'm using for this project Models/Person.cs – For this demo, I'm using the Person.cs class Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen Views/Home/Index.cshtml – Index view (Razor view engine); the other items are not important for the demo. ASP.NET MVC 1. Model In this demo I use only one Person class, which defines the Person entity with several properties. You can use your own model, maybe one which will access data from a database or any other resource. Person.cs

public class Person
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
    public int? Phone { get; set; }
    public DateTime? DateAdded { get; set; }
    public int? Age { get; set; }

    public Person(string name, string surname, string email,
        int? phone, DateTime? dateadded, int? age)
    {
        Name = name;
        Surname = surname;
        Email = email;
        Phone = phone;
        DateAdded = dateadded;
        Age = age;
    }
}

2. View In our example, we have only one Index.cshtml page, where the Razor view engine is used. The Razor view engine is my favorite for ASP.NET MVC because it's very intuitive, fluid and keeps your code clean. 3. Controller Since this is a simple example with one page, we use one HomeController.cs with two methods: one of ActionResult type (Index) and another, GetPeople(), used to create and return a list of people.
HomeController.cs

public class HomeController : Controller
{
    //
    // GET: /Home/
    public ActionResult Index()
    {
        ViewBag.People = GetPeople();
        return View();
    }

    public List<Person> GetPeople()
    {
        List<Person> listPeople = new List<Person>();
        listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
        listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
        listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
        listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
        listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
        listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
        listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
        listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
        listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
        listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
        listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
        listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
        listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));
        return listPeople;
    }
}

TABLE CSS/HTML DESIGN Now, let's start with the implementation. First of all, let's create the table structure and the main CSS. 1. HTML Structure

@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <title>ASP.NET & jQuery</title>
    <!-- referencing styles, scripts and writing custom js scripts will go here -->
</head>
<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th> value </th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>value</td>
                </tr>
            </tbody>
            <tfoot>
                <tr>
                    <th> value </th>
                </tr>
            </tfoot>
        </table>
        <div id="pager">
        </div>
    </div>
</body>
</html>

So, this is the main structure you need to create for each of the tables where you want to apply the functionality we will create. Of course the scripts are referenced only once ;). As you can see, our table has the class tablesorter and we also have a div with id pager. In the next steps we will use both of these to create the needed functionality.
The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:

<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </thead>
            <tbody>
            @{
                foreach (var p in ViewBag.People)
                {
                    <tr>
                        <td>@p.Name</td>
                        <td>@p.Surname</td>
                        <td>@p.Email</td>
                        <td>@p.Phone</td>
                        <td>@p.DateAdded</td>
                    </tr>
                }
            }
            </tbody>
            <tfoot>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </tfoot>
        </table>
        <div id="pager" style="position: none;">
            <form>
            <img src="@Url.Content("~/Content/images/first.png")" class="first" />
            <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />
            <input type="text" class="pagedisplay" />
            <img src="@Url.Content("~/Content/images/next.png")" class="next" />
            <img src="@Url.Content("~/Content/images/last.png")" class="last" />
            <select class="pagesize">
                <option selected="selected" value="5">5</option>
                <option value="10">10</option>
                <option value="20">20</option>
                <option value="30">30</option>
                <option value="40">40</option>
            </select>
            </form>
        </div>
    </div>
</body>

So, mainly the structure is the same. I have added Razor code to create the table with data retrieved from ViewBag.People, which was filled with data in the home controller. 2. CSS Design The CSS code I've created is:

/* DEMO TABLE */
body {
    font-size: 75%;
    font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;
    color: #232323;
    background-color: #fff;
}
table { border-spacing:0; border:1px solid gray;}
table.tablesorter thead tr .header {
    background-image: url(images/bg.png);
    background-repeat: no-repeat;
    background-position: center right;
    cursor: pointer;
}
table.tablesorter tbody td {
    color: #3D3D3D;
    padding: 4px;
    background-color: #FFF;
    vertical-align: top;
}
table.tablesorter tbody tr.odd td {
    background-color:#F0F0F6;
}
table.tablesorter thead tr .headerSortUp {
    background-image: url(images/asc.png);
}
table.tablesorter thead tr .headerSortDown {
    background-image: url(images/desc.png);
}
table th {
    width:150px;
    border:1px outset gray;
    background-color:#3C78B5;
    color:White;
    cursor:pointer;
}
table thead th:hover { background-color:Yellow; color:Black;}
table td { width:150px; border:1px solid gray;}

PAGINATION AND SORTING Now, when everything is ready and we have the data, let's add the pagination and sorting functionality. 1.
jQuery Scripts referencing

<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

2. jQuery Sorting and Pagination script

<script type="text/javascript">
    $(function () {
        $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
        .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
    });
</script>

So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins, passing some options to each. Options added: tablesorter – widthFixed: true – gives the columns a fixed width. tablesorter – sortList: [[0,0]] – an array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/). tablesorterPager – container: $("#pager") – specifies the pager container, the div with id pager in our case. tablesorterPager – size: the default number of rows per page; here I read the currently selected option, so whichever option in your select list carries the selected attribute also becomes the table's default page size. END RESULTS 1. Table once the page is loaded (the default is 5 results per page, automatically sorted by the 1st column, as sortList specifies) 2. Sorted by Phone descending 3. Changed pagination to 10 items per page 4. Sorted by Phone and Name (use SHIFT to sort on multiple columns) 5. Sorted by Date Added 6. Page 3, 5 items per page   ADDITIONAL ENHANCEMENTS We can make additional enhancements to the table, such as a search for each column. I will cover this in one of my next blogs. Stay tuned. DEMO PROJECT You can download the demo project source code from HERE. CONCLUSION Once you finish with the demo, run your page and open the source code. You will be amazed at the purity of your code. Working with pagination on the client side can be very useful. One of the benefits is performance, but if you have thousands of rows in your tables, you will get the opposite result; hence, sometimes it is a nice idea to paginate on the back end. The best compromise is to combine both approaches: I use at most 500 rows on the client side, and once the user reaches the last page, we can trigger an AJAX postback which gets the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post: http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from repositories. I hope this was a helpful post for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan
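As a sketch of that hybrid approach (illustrative only: the action name /Home/GetPeoplePaged and the row shape are assumptions, not part of the demo project), newly fetched rows can be appended to the table and tablesorter told to refresh:

// Illustrative only: fetch the next batch of rows from a hypothetical
// server-side action and append them to the tablesorter-managed table
function loadMoreRows(skip) {
    $.getJSON('/Home/GetPeoplePaged', { skip: skip, take: 500 }, function (rows) {
        $.each(rows, function (i, p) {
            $('table.tablesorter tbody').append(
                '<tr><td>' + p.Name + '</td><td>' + p.Surname + '</td><td>' +
                p.Email + '</td><td>' + p.Phone + '</td><td>' + p.DateAdded + '</td></tr>');
        });
        // Tell tablesorter the rows changed so sorting and paging stay correct
        $('table.tablesorter').trigger('update');
    });
}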

    Read the article

  • How to enable Google Drive offline access

    - by Gopinath
Google’s latest cloud offering, Google Drive, provides 5GB of free storage to let you store documents, spreadsheets, photos and other stuff and access them using a variety of devices – PCs, Macs, smartphones and tablets. You can also set up offline access to Google Drive so that you can access files on the move even if you don’t have access to an internet connection. To access Google Drive offline you need the Chrome browser, and here are the simple steps to follow to set it up. Step 1: Log in to Google Drive and click the gear icon in the upper right of your window. Step 2: Select Set up Docs offline from the drop-down menu. The “Set up offline viewing of Google Docs” dialog will appear. Step 3: Authorize Google Chrome to store your Google Drive content by clicking on “Allow offline docs” and then install the “Docs Chrome web app” by clicking on “Install from Chrome web store”. You’ll be taken to the Chrome web store, where you’ll need to click Install on the right-hand side of the browser window. Step 4: Once the app is installed, you’ll be taken to a Chrome page with the Google Docs app icon. Click the icon to go back to your Documents List. Google Chrome takes a few minutes to prepare Google Drive for offline access by downloading all the files to your local computer. Once that’s completed, you can access Google Drive files offline. To access Google Drive files offline, point your Chrome browser to drive.google.com. When offline, your Google Docs stored on Google Drive are available in view-only mode. You can open Google Documents, Spreadsheets & Presentations and see the content, but you can’t edit them.

    Read the article

  • How to Add the Windows Calculator to the Quick Access Toolbar in Microsoft Excel 2013

    - by Lori Kaufman
    Do you use the Windows Calculator to perform quick calculations while building spreadsheets in Excel? You can save time by adding the Calculator to the Quick Access Toolbar in Excel so you don’t have to leave the program to access the Calculator. To do this, click the down arrow on the right side of the Quick Access Toolbar and select More Commands from the drop-down menu. On the Quick Access Toolbar screen on the Excel Options dialog box, select Commands Not in the Ribbon from the Choose commands from drop-down list. Scroll down in the long list and select Calculator. Click Add to add the Calculator to the Quick Access Toolbar. Click OK to accept the change and close the Excel Options dialog box. You’ll see a Calculator icon on the Quick Access Toolbar. When you move your mouse over the icon, a hint displays saying “Custom.” Despite the label, when you click the icon, the Windows Calculator opens. The same procedure works for adding the Windows Calculator to Excel 2010, as well.     

    Read the article

  • CMOTECH D-50 modem installation in Ubuntu 12.04

    - by Ricardo
I have recently upgraded from 10.04 to 12.04. I had installed a 3G D-50 modem from CMOTECH. The program for Debian is provided by a Swedish company (ice.net). Usually, after some mumbo jumbo of installing the requested libg++ libraries, you can install it; it ran on Ubuntu 8.04, 9.04, 10.04 and, as far as I know, on 11.04. When I upgraded and clicked on the ice.net icon, it worked. However, I noticed that the USB D-50 modem was never mounted as USB and it didn't show up in the launcher or workspace (as when you plug in a USB memory stick or another HD). I moved the icon to a different place in the launcher, and since then, when I click on the ice.net icon, the same message appears: "please plug in your modem". The modem works (I've tested it in Windows after this) and it blinks blue (a sign that it works and picks up signal). If I type lsusb then I see that Ubuntu sees it on bus address 006: Bus 006 Device 003: ID 16d8:6803 CMOTECH Co., Ltd. CNU-680 CDMA EV-DO modem I've tried wvdial without success. How can I get the D-50 USB modem mounted as in the previous versions of Ubuntu? Any help will be much appreciated. Bests, Ricardo
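Since wvdial was already tried, a minimal /etc/wvdial.conf sketch for a CDMA EV-DO modem is shown below; the device node /dev/ttyUSB0 and the #777 dial string are assumptions that depend on the carrier and on how the kernel names the modem:

# /etc/wvdial.conf -- illustrative values only
[Dialer Defaults]
Modem = /dev/ttyUSB0
Baud = 460800
Init1 = ATZ
Phone = #777
Username = user
Password = pass
Stupid Mode = 1

# Then dial with:
#   sudo wvdial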

    Read the article

  • Missing Print Option in Google Maps? Here Is How You Find it.

    - by Gopinath
Summary: Wondering how to print driving directions in the new version of Google Maps, since there is no print icon on the Maps home page? It's buried a few layers down: search for the directions you need first, then click List all steps to see the Print icon in the top right section. The classic version of Google Maps had the Print option in a prominent location on the home page, and it was easy to use it to print driving directions. But in the new version of Google Maps, for better or worse, Google decided to remove the Print option from the home screen. This left many Google Maps users wondering how to print driving directions. In the new version of Google Maps, the Print option is displayed only when you view the full list of driving directions. Here are the simple steps to follow for printing driving directions on the new Google Maps: Open Google Maps. Search for directions and click List all steps in the directions card. Click the printer icon in the top right corner. Tada!! You found a way to print your directions. Here is a nice animation of the instructions to print directions, provided by Google

    Read the article

  • Why can't I run Compiz?

    - by jasoncruz98
I installed Ubuntu 11.10 on my laptop. The main reason I switched to Ubuntu is that I wanted to use Compiz. The first thing I did was to go to Additional Drivers and install the ATI/AMD Proprietary FGLRX Graphics Driver. There was also another one available, ATI/AMD Proprietary FGLRX Graphics Driver (post-release updates), but I didn't install that one, because it basically meant the same thing to me as the one I already installed. Next, I went to the ubuntuguide.org Oneiric Wiki http://ubuntuguide.org/wiki/Ubuntu:Oneiric#Compiz_Fusion So I followed the instructions there and ran this command in a terminal: sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra emerald librsvg2-common But then, the terminal window said that the package "emerald" could not be found. So, I ran this command instead: sudo apt-get install compiz compizconfig-settings-manager compiz-fusion-plugins-main compiz-fusion-plugins-extra After that, I installed Fusion Icon by running this command: sudo apt-get install fusion-icon I restarted my computer, searched for CompizConfig Settings Manager, and clicked on it. Then, I activated Wobbly Windows. I logged off and logged back on again, but there was no wobbly windows effect. So I tried clicking on Fusion Icon, but it never started. Can someone please tell me what I did wrong here? It seems everyone is able to run Compiz except me. I really need to start Compiz, or else I think I'm going to uninstall Ubuntu.
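One low-risk way to see why Compiz refuses to start (a diagnostic sketch, not a guaranteed fix) is to launch it from a terminal so its error output is visible; note that on 11.10, Unity itself runs as a Compiz plugin, so this replaces the running window manager:

# Start Compiz in place of the current window manager and watch for errors
compiz --replace &

# If window decorations vanish, a decorator can be restarted too
gtk-window-decorator --replace &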

    Read the article

  • unity, seeing all instances of same open application windows on all virtual desktop

    - by Nasser M. Abbasi
I noticed this strange issue with Unity. I am using 12.04. The desktop has 4 virtual desktops, which I can switch between using the 'workspace switcher', which is very nice. But I noticed the following: when I have 2 instances of the same app (say 2 different Firefox windows, or 2 different terminal windows) in 2 different virtual desktops, and I click on the icon for that application on the launcher panel (the long left strip with icons on it), the application comes into focus. When I then click again right away (on the same icon on the launcher), all open instances of this application come into ONE view (maybe one was on desktop 1, and the other was on desktop 3, for example), and I can then click on the one window that I want to use. This is all very nice actually. But this does NOT work for all applications! I just tried it, and it worked for Firefox, for gedit and for the GNOME terminal. I have one Firefox window open in virtual desktop 1, and another window open in virtual desktop 2. I clicked once on the Firefox icon, then again, and both windows came into the main desktop and I was able to select which one to use. When I tried the same thing on the Dolphin file manager, of which I also had 2 windows (instances) open in 2 different virtual desktops, this behavior did not happen. I clicked again, and nothing happened. Only one remained in focus. So I had to go look for the second Dolphin window the hard way. It looks like some apps are supported by this feature and some are not. How does one make it so that all applications are supported like this? This is a very handy feature. Is it a configuration item somewhere? Thanks

    Read the article

  • Chrome Time Track Is a Simple Task Time Tracker

    - by ETC
If you don’t need advanced project tracking but still want to track how long it takes you to finish certain tasks or projects, Chrome Time Tracker is a free Chrome extension with a dead simple interface. Install the extension and a Time Track icon appears in your toolbar. Click it to create new tasks, delete old tasks, and start, stop, and reset your task timers. When the timer is running the icon changes to indicate you’re on the clock; pause the timer and it toggles back to the default icon. Hit up the link below to grab a copy. Chrome Time Track is free and works wherever Google Chrome does. Chrome Time Track [Google Chrome Extensions]

    Read the article

  • How do I get a rt2800usb wireless device working?

    - by Jii
    My brand new desktop running 13.04 has endless problems with wireless. Dozens of others are flooding forums with reports of the same problems. It worked fine for a few days, then there were a few days where it started having problems sometimes and working sometimes. Now it never works at all. I have 5+ devices all able to connect without any trouble at all, including iPhone, Android phone, 3DS, multiple game consoles, a laptop running windows 7, and even a second desktop machine running Ubuntu 12.04 sitting right behind the 13.04 machine. All other devices have full wireless bars displayed (strong signals). At any moment, one of the following is happening, and it changes randomly: Trying to connect forever, but never establishing a connection. Wireless icon constantly animating. Finds no wireless networks at all. (There are 12+ in range according to other devices.) Will not try to connect to the network. If I use the icon to connect, it will display "Disconnected" within a few seconds. Will continuously ask for the network password. Typing it in correctly does not help. Wireless is working fine. This happens sometimes. It can work for days at a time, or only 10 mins at a time. Various things that usually do nothing but sometimes fix the problem: Reboot. This has the best chance of helping, but it usually takes 5+ times. Disable/re-enable Wi-Fi using the wireless icon. Disable/re-enable Networking using the wireless icon. Use the icon to try and connect to a network (if found). Use the icon to open Edit Connections and delete my connection info, causing it to be recreated (once it's actually found again). Various things that seem to make no difference: Changing between using Linux headers in grub at bootup, between 3.10.0, 3.9.0, or 3.8.0. Move the wireless router very close to the desktop. Running sudo rfkill unblock all (I dunno what this is supposed to do.) I've used Ubuntu for 6 years and I've never had a problem with networking. Now I'm spending all my time reading through endless problem reports and trying all the answers. None of them have helped. I am doing this instead of getting work done, which is defeating the whole purpose of using Ubuntu. It's heartbreaking to be honest. 
In the current state of "no networks are showing up", here are outputs from the random things that other people are usually asked to run: lspic 00:00.0 Host bridge: Intel Corporation Haswell DRAM Controller (rev 06) 00:01.0 PCI bridge: Intel Corporation Haswell PCI Express x16 Controller (rev 06) 00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Lynx Point MEI Controller #1 (rev 04) 00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04) 00:1a.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Lynx Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #1 (rev d4) 00:1c.2 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d4) 00:1d.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Lynx Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Lynx Point SMBus Controller (rev 04) 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1) 01:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1) 03:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03) lsmod Module Size Used by e100 41119 0 nls_iso8859_1 12713 1 parport_pc 28284 0 ppdev 17106 0 bnep 18258 2 rfcomm 47863 12 binfmt_misc 17540 1 arc4 12573 2 rt2800usb 27201 0 rt2x00usb 20857 1 rt2800usb rt2800lib 68029 1 rt2800usb rt2x00lib 55764 3 rt2x00usb,rt2800lib,rt2800usb coretemp 13596 0 mac80211 656164 3 rt2x00lib,rt2x00usb,rt2800lib kvm_intel 138733 0 kvm 452835 1 kvm_intel cfg80211 547224 2 mac80211,rt2x00lib crc_ccitt 12707 1 rt2800lib ghash_clmulni_intel 13259 0 aesni_intel 55449 0 usb_storage 61749 1 aes_x86_64 17131 1 aesni_intel joydev 17613 0 xts 12922 1 aesni_intel nouveau 1001310 3 snd_hda_codec_hdmi 37407 1 lrw 13294 1 aesni_intel gf128mul 14951 2 lrw,xts mxm_wmi 13021 1 nouveau snd_hda_codec_realtek 46511 1 ablk_helper 13597 1 aesni_intel wmi 19256 2 mxm_wmi,nouveau snd_hda_intel 44397 5 ttm 88251 1 nouveau drm_kms_helper 49082 1 nouveau drm 295908 5 ttm,drm_kms_helper,nouveau snd_hda_codec 190010 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel cryptd 20501 3 ghash_clmulni_intel,aesni_intel,ablk_helper snd_hwdep 13613 1 snd_hda_codec snd_pcm 102477 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel btusb 18291 0 snd_page_alloc 18798 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 i2c_algo_bit 13564 1 nouveau snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30417 1 snd_seq_midi snd_seq 61930 2 snd_seq_midi_event,snd_seq_midi bluetooth 251354 22 bnep,btusb,rfcomm snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi lpc_ich 17060 0 snd_timer 29989 2 snd_pcm,snd_seq mei 46588 0 snd 69533 20 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device psmouse 97838 0 microcode 22923 0 soundcore 12680 1 snd video 19467 1 nouveau mac_hid 13253 0 serio_raw 13215 0 lp 17799 0 parport 46562 3 lp,ppdev,parport_pc hid_generic 12548 0 usbhid 47346 0 hid 101248 2 hid_generic,usbhid ahci 30063 3 libahci 32088 1 ahci e1000e 207005 0 ptp 18668 1 e1000e pps_core 14080 1 ptp sudo lshw -c network 00:00.0 Host bridge: Intel Corporation Haswell DRAM Controller 
(rev 06) 00:01.0 PCI bridge: Intel Corporation Haswell PCI Express x16 Controller (rev 06) 00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Lynx Point MEI Controller #1 (rev 04) 00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04) 00:1a.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Lynx Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #1 (rev d4) 00:1c.2 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d4) 00:1d.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Lynx Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Lynx Point SMBus Controller (rev 04) 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1) 01:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1) 03:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03) sudo iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on sudo iwlist scan eth0 Interface doesn't support scanning. lo Interface doesn't support scanning. wlan0 No scan results NOTE: This dmesg was done after a reboot where the network manager was continuously displaying the "disconnected" message over and over. So it must have been trying to connect at this time. My network was displayed in the list of options, as the only option despite other devices picking up 12+ access points. The router channel is set to auto. 
dmesg | tail -30 [ 187.418446] wlan0: associated [ 190.405601] wlan0: disassociated from 00:14:d1:a8:c3:44 (Reason: 15) [ 190.443312] cfg80211: Calling CRDA to update world regulatory domain [ 190.443431] wlan0: deauthenticating from 00:14:d1:a8:c3:44 by local choice (reason=3) [ 190.451635] cfg80211: World regulatory domain updated: [ 190.451643] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 190.451648] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 190.451652] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 190.451656] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 190.451659] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 190.451662] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 191.824451] wlan0: authenticate with 00:14:d1:a8:c3:44 [ 191.850608] wlan0: send auth to 00:14:d1:a8:c3:44 (try 1/3) [ 191.884604] wlan0: send auth to 00:14:d1:a8:c3:44 (try 2/3) [ 191.886309] wlan0: authenticated [ 191.886579] rt2800usb 3-5.3:1.0 wlan0: disabling HT as WMM/QoS is not supported by the AP [ 191.886588] rt2800usb 3-5.3:1.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP [ 191.889556] wlan0: associate with 00:14:d1:a8:c3:44 (try 1/3) [ 192.001493] wlan0: associate with 00:14:d1:a8:c3:44 (try 2/3) [ 192.040274] wlan0: RX AssocResp from 00:14:d1:a8:c3:44 (capab=0x431 status=0 aid=3) [ 192.044235] wlan0: associated [ 193.948188] wlan0: deauthenticating from 00:14:d1:a8:c3:44 by local choice (reason=3) [ 193.981501] cfg80211: Calling CRDA to update world regulatory domain [ 193.984080] cfg80211: World regulatory domain updated: [ 193.984082] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 193.984084] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 193.984085] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 193.984085] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 193.984086] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 193.984087] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) The router uses MAC filtering, and security is WPA PSK with cipher as auto. So, any ideas? Or is the solution just to not use 13.04 unless you have a wired connection? (I don't have this option.) If so, please just tell me straight. I survived 9.04 Jaunty, and I can survive 13.04 Raring. Update #1 Results from trying Wild Man's first answer: jii@conan:~$ echo "options rt2800usb nohwcrypt=y" | sudo tee /etc/modprobe.d/rt2800usb.conf options rt2800usb nohwcrypt=y jii@conan:~$ sudo modprobe -rfv rt2800usb rmmod rt2800usb rmmod rt2800lib rmmod crc_ccitt rmmod rt2x00usb rmmod rt2x00lib rmmod mac80211 rmmod cfg80211 jii@conan:~$ sudo modprobe -v rt2800usb insmod /lib/modules/3.10.0-031000-generic/kernel/lib/crc-ccitt.ko insmod /lib/modules/3.10.0-031000-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.10.0-031000-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2x00lib.ko insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2800lib.ko insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2x00usb.ko insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2800usb.ko nohwcrypt=y I tried: gksudo gedit /etc/pm/power.d/wireless but I didn't have the package. 
It said to install gksu. I tried that, but of course, not having Internet, I didn't get the package. So instead I did: sudo gedit /etc/pm/power.d/wireless Which created the file. Here is the body: #!/bin/sh /sbin/iwconfig wlan0 power off I then rebooted. No change. I tried adding exit 0 to the bottom of the wireless file, and rebooted. No change. Please note that this is a desktop machine. I'm assuming power management is primarily for laptops, but the iwconfig does state that power management is on, so who knows. The recommended router changes I did not do, since the current router settings are (I think) required for some of the older devices I have, and because the current settings work on all my modern devices including Ubuntu 12.04 and Windows 7. I do appreciate the advice though, and I'll look into it when I have time. Anything else to try? Update #2 I booted into Ubuntu 12.04.3 from a dvd, and the same problems exist. I have a separate old desktop machine with 12.04 installed that has no wireless problems at all. So obviously the problem is wireless hardware compatibility in both 12.04.03 LTS and 13.04. Update #3 The same problems exist even when using a wired connection. I plugged an ethernet cable directly to the router and the network manager added an "Auto Ethernet" entry, but it cannot establish a connection to it. So the problem is not specific to wireless. Meanwhile, I purchased a Trendnet N300 wireless USB adapter, TEW-664UB. I plugged it in, but I have no idea how to get Ubuntu to try and use it. Can anyone tell me how? Can I download a package on another computer and copy the .deb over to do an install, etc? I'm installing windows 7 to double check that the internet connection works there and it's not just some magically faulty hardware. Thanks for your help.
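On the side question of installing a package without a connection: a .deb downloaded on another machine can be installed offline with dpkg (the package file name below is illustrative):

# Copy the .deb over on a USB stick, then:
sudo dpkg -i gksu_*.deb

# If dpkg reports unmet dependencies, install those .debs the same way,
# or repair them once a network connection is available:
sudo apt-get -f install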

    Read the article

  • Problems with icons and shutting down Ubuntu 12.04

    - by Anders
I recently upgraded from 11.10 to 12.04 as a dual boot (Windows 7) and am having problems with shutting down. I installed from a CD that I burned from the iso file, and to get it to boot from the CD, I needed to install the special helper program from Windows that would then recognize the CD as the system booted. The installation gave me my own user account as well as a guest account. For the User account there is no "gear" icon (in the upper right-hand side) from which to access the shut down menu. Interestingly, the icons for the Home folder, the Ubuntu One folder, and the System Settings folder are missing, although there are blank places that show these names if the mouse cursor is positioned over those areas. They will even launch with the press of a key - but no icons are visible. For the Guest account, all of these icons are visible and usable. The problem with shutting down is that I need to leave the User account and move to the Guest account (so that I can access the top right "gear" icon that has the shut down menu item) and press the shut down button. The problem here is that when the shut down menu appears and I confirm that I wish to shut down the computer, the page blanks (as one might hope in the process of closing down), and then pops up with the login menu, giving the option of logging in as a User or as a Guest. So the questions are: 1) how do you reinstate the top right icon from which you can access the shut down drop-down menu in the User account? 2) how do you get the icons to display properly on the left-hand side (Home, Ubuntu One, System Settings, and Workspace Switcher)? 3) how do you get the system to turn off when you press the shut down button? Boy oh boy! Thanks a bunch for any help!
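Until the gear menu is restored, a terminal can serve as a stopgap for question 3; either standard command powers the machine off:

# Either of these shuts the machine down immediately
sudo shutdown -h now
sudo poweroff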

    Read the article

  • Chainloading GRUB2 from BURG

    - by WindowsEscapist
I have an old PC with Puppy Linux in addition to Ubuntu and Windows XP. This creates a LOT of menu entries (all of which I would like to keep): Ubuntu 10.04, Ubuntu Recovery Mode, Memtest x86, Memtest Serial, Windows XP Pro, Precise Puppy Linux, Precise Puppy TORAM, Puppy 4.3.1, Puppy 4.3.1 TORAM, and Plop Boot Manager (for booting to USB; the PC doesn't have a BIOS option). Now, on my fancy shiny laptop I've gotten really attached to BURG, and would like a setup where I have a Windows icon, an Ubuntu icon, and an arrow that chainloads GRUB2 so that I can boot from USB or run Puppy if need be (all these entries will obviously not fit into the BURG theme I use, Lightness). The problem is, GRUB2 can't install to the beginning of a partition like it used to be able to (I am reluctant to specify anything with --force at the end), at least not without warning that this is a BAD idea. So I'm kind of at a loss here. I can't see how the folding option would work, because all of those other options would have the same icon once unfolded (Lightness is non-text-based). If I do embed GRUB using grub-install /dev/sdax --force, how do I chainload it with BURG? Is there another way?
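By way of illustration, a chainload entry in BURG uses the same syntax as GRUB2; the sketch below assumes GRUB2 has been embedded in the boot sector of /dev/sda3 (which BURG numbers (hd0,3)), and the title and class are arbitrary:

menuentry "GRUB2 menu" --class arrow {
    set root=(hd0,3)
    chainloader +1
}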

    Read the article

  • Hide icons encrypted file system partitions in Nautilus

    - by Eddy Pronk
I've installed Ubuntu 10.04 from the alternate CD. It has an encrypted root and swap partition. The root partition is visible in Nautilus as the 'File System' icon. There is another icon, "216 GB Filesystem". If I click it, it says: Unable to mount 216 GB Filesystem. /dev/mapper/sda5_crypt is mounted. Then there is another icon, "6.1 GB Swap Space". If I click it, it says: Unable to mount 6.1 GB Swap Space. Not a mountable file system. How can I hide these last two icons? Partition layout:

$ sudo fdisk -l /dev/sda
[sudo] password for eddyp:
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa6e92df4
Device Boot Start End Blocks Id System
/dev/sda1 1 11749 94373811 7 HPFS/NTFS
/dev/sda2 11871 38914 217219073 5 Extended
/dev/sda3 * 11750 11871 976896 83 Linux
/dev/sda5 11871 38167 211220480 83 Linux
/dev/sda6 38167 38914 5997568 83 Linux
Partition table entries are not in disk order

Mounted as:

$ mount
/dev/mapper/sda5_crypt on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/dev/sda3 on /boot type ext4 (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
gvfs-fuse-daemon on /home/eddyp/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=eddyp)
/dev/sda1 on /media/S3A6595D003 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
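One possible approach (a sketch, untested on this exact setup): the udisks stack used by Ubuntu 10.04 honors a per-device UDISKS_PRESENTATION_HIDE flag set from a udev rule; the rule file name is arbitrary, and the kernel names match the partitions above:

# /etc/udev/rules.d/99-hide-partitions.rules
# Hide the raw encrypted container and the swap partition from Nautilus
KERNEL=="sda5", ENV{UDISKS_PRESENTATION_HIDE}="1"
KERNEL=="sda6", ENV{UDISKS_PRESENTATION_HIDE}="1"

# Apply without rebooting:
#   sudo udevadm control --reload-rules && sudo udevadm trigger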

    Read the article

  • Burg Custom Icons work only with specific themes

    - by el10780
I have made a custom icon for the burg loader for my Lenovo Recovery Partition. I have made 3 icons: large_qdrive.png (128 x 128 pixels), small_qdrive.png (24 x 24 pixels), grey_qdrive.png (128 x 128 pixels). I created the .png icons using GIMP from a qdrive.ico file that I found in the Lenovo Recovery Partition. I transferred the icons to the /boot/burg/themes/icons folder and I added the following lines to the class lists of the grey, large, small and hover files:

-qdrive { image = "$$/large_qdrive.png" } in the large file
-qdrive { image = "$$/small_qdrive.png" } in the small file
-qdrive { image = "$$/grey_qdrive.png" } in the grey file
-qdrive { image = "$$/grey_qdrive.png:$$/large_qdrive.png" } in the hover file

I ran sudo update-burg and after that I modified the following line in the burg.cfg file:

menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {

to

menuentry "Windows 7 (loader) (on /dev/sda2)" --class qdrive --class os {

and I also tried to change the title for the Lenovo Recovery Partition, so I tried this as well:

menuentry "Lenovo Recovery Partition (on /dev/sda2)" --class qdrive --class os {

None of these attempts actually forced the burg loader to use the custom icons that I made, and I can't figure out why. I should also mention that there are a few themes I have installed in burg which actually are able to use the small_qdrive.png icon that I made, but all the others, which use either the large_qdrive.png or the grey_qdrive.png, weren't able to use the custom icons. I have double-checked for typos in all the files that I have created or modified, so I am pretty sure that I haven't misspelled anything. I have also checked the names of the custom icons that I made and none of them has a typo. I have also looked to see whether there are any other folders that the themes might use to retrieve the icons, but it seems that all of them, except for the **Fortune** theme, which I downloaded from OMG!UBUNTU, use the icons folder which is located in /boot/burg/themes/icons. I tried to add the custom icons to the icons folder of the **Fortune** theme, but still nothing happened.

    Read the article

  • Why do the GNOME symbolic icons appear darker in a running application?

    - by David Planella
    I'm creating an application that uses symbolic icons from the default theme. However, there are a few icons that I need that cannot be represented by those from the default theme, so I'm creating my own ones. What I did was to simply go to /usr/share/icons/gnome/scalable/actions/, copied a few locally into my app's source tree that could serve as a basis, and started editing them. So far so good. But I've noticed the following: all symbolic icons are of a light grey color when looking at the original .svg file, but when they are put onto a widget, they become darker. Here's an example, using the /usr/share/icons/gnome/scalable/actions/view-refresh-symbolic.svg icon from the default theme: Here's what it looks like when opening the original with Inkscape: And here's what it looks like on a toolbar on a running application: Notice the icon being much darker at runtime. That happens both with the Ambiance and Radiance themes. I wouldn't mind much, but I noticed it affects my custom icon, whereby parts of it become darker (the inner fill), whereas parts of it remain the same color as the original (the stroke). So what causes the default symbolic icons to darken and how should implement that for my custom icons?
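A plausible explanation, hedged since it rests on how GTK treats -symbolic icons in general rather than on this specific app: GTK recolors symbolic icons at render time to the widget's foreground color, which is why the light grey #bebebe source appears darker on a toolbar; shapes that hardcode other colors can be left untouched, which would match the fill/stroke mismatch described. A minimal custom symbolic SVG that recolors uniformly keeps every shape on the base color (the path data below is invented):

<?xml version="1.0" encoding="UTF-8"?>
<!-- my-action-symbolic.svg: all shapes use the #bebebe base color so
     GTK can recolor the entire icon to match the widget style -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16">
  <path d="M 2 8 L 8 2 L 14 8 L 8 14 Z" fill="#bebebe" stroke="none"/>
</svg>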

    Read the article

  • Dropbox fails to install on Ubuntu 13.10

    - by Fraxav
I've done some research and it seems Dropbox has some problems with the status icon on Ubuntu 13.10, but I can't even manage to install it. The very first time, I installed nautilus-dropbox from the app center on a freshly installed Saucy, but I couldn't see the status icon and clicking the app icon didn't help, so I uninstalled and rebooted. The second and third times, the app just froze at about 75% while installing from the app manager, so I had to shut down the computer and uninstall. Next I tried to install Dropbox from the .deb (from the website): it installed (the launcher?) from the app manager, then the dialog prompted me to start Dropbox and finally to download the official daemon, but the download froze at 0% and made everything else buggy/laggy (probably also because I restarted nautilus, as required, at that point). After a purge and clean from the terminal, some icons and this file (which I suspect is somehow responsible, but of course is locked) are still present on my computer. Can anyone help me figure out how to install it? How could I remove that configuration file so I can give it a last shot?
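A cleanup-and-retry sequence from a terminal may help (a sketch; the paths are Dropbox's usual state directories, but verify before deleting anything):

# Remove leftover Dropbox state from the failed attempts
rm -rf ~/.dropbox ~/.dropbox-dist
sudo apt-get purge nautilus-dropbox

# Reinstall, then let the command-line client fetch the proprietary daemon
sudo apt-get install nautilus-dropbox
dropbox start -i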

    Read the article

  • determine an application's process name on linux (ubuntu)

    - by Jacob
This is the situation: working on (the next version of) a Unity quicklist editor, I would like to add a reliable way of "restarting" launcher icons. To do so, I need to remove the icon (by editing gsettings) and replace it in the same position. So far, no problem. However, if the application in question is running, the user will possibly lose data, as the application will quit when its icon is removed from the launcher. What I need is a reliable way to find an application's process name, to let the editor check the list of running processes to see whether the application is running, and send the user a warning that the icon cannot be restarted while the application is running. What I did so far is make the editor look into the desktop file to read the command, also read the command stripped of its directory section, and furthermore look into possible remote scripts the desktop file command might refer to, looking for strings starting with "./". A sketch of this approach follows below. Although the method seems to work well with all the applications I tested it on, I have the feeling there must be an easier way to cover the problem in an "all in one" way... Is there? Also, suggestions to catch more exceptional situations are welcome!
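A shell sketch of that idea (the script name is illustrative): read the Exec= command from the .desktop file, strip field codes and the directory part, then check the process list:

#!/bin/sh
# Usage: ./is_running.sh /usr/share/applications/gedit.desktop
desktop_file="$1"
# First Exec= line, minus the Exec= prefix and %-style field codes
cmd=$(grep -m1 '^Exec=' "$desktop_file" | sed 's/^Exec=//; s/ %.*//')
# First word of the command, without its directory part
proc=$(basename "$(echo "$cmd" | awk '{print $1}')")
if pgrep -x "$proc" >/dev/null; then
    echo "$proc appears to be running"
else
    echo "$proc does not appear to be running"
fi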

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >