Search Results

Search found 12935 results on 518 pages for 'non recursive'.

  • AR242x / AR542x wireless card not working

    - by Pipan87
    My wifi worked perfectly until I updated to the latest version of Ubuntu. Now I don't find any wireless connections at all. I have tried lots of guides on the internet but I can't get it to work. It did, however, start working once after I typed something (I don't remember what) in the Terminal, but after rebooting it stopped working again. Some info (let me know if you need more to help):

        01:00.0 Ethernet controller: Atheros Communications AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0)
            Subsystem: Acer Incorporated [ALI] Device 022c
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at 55200000 (64-bit, non-prefetchable) [size=256K]
            I/O ports at 3000 [size=128]
            Capabilities: [40] Power Management version 2
            Capabilities: [48] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [58] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [180] Device Serial Number ff-93-2e-de-00-23-8b-ff
            Kernel driver in use: ATL1E
            Kernel modules: atl1e

        02:00.0 Ethernet controller: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) (rev 01)
            Subsystem: Foxconn International, Inc. Device e00d
            Flags: bus master, fast devsel, latency 0, IRQ 18
            Memory at 54100000 (64-bit, non-prefetchable) [size=64K]
            Capabilities: [40] Power Management version 2
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
            Capabilities: [60] Express Legacy Endpoint, MSI 00
            Capabilities: [90] MSI-X: Enable- Count=1 Masked-
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Virtual Channel
            Kernel driver in use: ath5k
            Kernel modules: ath5k
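    For reference, a first-pass diagnostic sequence for this class of ath5k problem (standard Ubuntu tools; a sketch, not a confirmed fix):

        sudo modprobe ath5k          # make sure the driver is actually loaded
        lsmod | grep ath             # confirm it stuck
        rfkill list                  # check for a soft or hard radio block
        sudo rfkill unblock all      # clear a soft block if one is shown
        dmesg | grep -i ath5k        # look for firmware or probe errors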

    Read the article

  • Should sanity be a property of a programmer or a program?

    - by toplel32
    I design and implement languages, ranging from object notations to markup languages. In many cases I have considered restrictions in favor of sanity (common knowledge), as in the case of control characters in identifiers. There are two consequences to consider before doing this: it takes extra computation, and it narrows liberty. I'm interested to learn how developers think about decisions like this. As you may know, Microsoft C# is, on the contrary, very permissive. If you really want to suffix your integer literal as long with 'l' instead of 'L', and so risk other developers confusing '1' and 'l', no problem. If you want to name your variables in non-Latin script so they contrast with C#'s Latin keywords, no problem. Or if you want to distribute a string over multiple lines and so break a series of indentation, no problem. Restrictions make consistency cheap to ensure, and this makes them tempting to implement. But in the case of disallowing non-Latin characters (the second example), it means a discredit to Unicode, because one would not take full advantage of its capacity.
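    A tiny illustration of the first example, valid C# that invites exactly the 1-versus-l confusion:

        long confusing = 1l;   // lowercase suffix: easily read as the number 11
        long clearer   = 1L;   // same value, unambiguous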

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know the common architectural patterns for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally expensive to produce and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this:
    1. A scheduled task in app B that connects to an API of app A that provides some aggregated data. App B then stores it in Redis and uses that to render pages. Cons: it performs complex calculations within a web request (on app A's side) and requires lots of work (API server and client, storage, etc.). Pros: the business logic still lives in app A.
    2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis, to be accessed by app B.
    3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it.
    I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested?
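    A minimal sketch of option 2 in Python (key names and the aggregation functions are hypothetical; the mapping argument to hset needs a reasonably recent redis-py):

        import redis  # assumes the redis-py client package

        r = redis.Redis(host="localhost", port=6379)

        def compute_sales(month_key):          # stand-in for app A's real logic
            return 0

        def compute_score(month_key):          # stand-in for app A's real logic
            return 0.0

        def aggregate_month(month_key):
            # Runs in a scheduled non-web process inside app A (option 2).
            stats = {"sales": compute_sales(month_key),
                     "responsiveness": compute_score(month_key)}
            # One hash per month, e.g. stats:2013-07; app B only reads these keys.
            r.hset("stats:" + month_key, mapping=stats)

        aggregate_month("2013-07")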

    Read the article

  • share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows the caller to also perform some processing. For instance:

        std::vector<Point> checkIntersections(int process_mode = 0);

    This method tests whether some building outlines intersect, and returns the intersection points. But if you pass a non-null argument, the method will modify the outlines to remove the intersections. I think this is pretty bad (at the call site, a reader unfamiliar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change it. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this:

        // a private worker
        std::vector<Point> workerIntersections(int process_mode = 0)
        {
            // the equivalent of the current checkIntersections; it may perform
            // a process depending on process_mode
        }

        // public interfaces for check and process
        std::vector<Point> checkIntersections() /* const */
        {
            return workerIntersections(0);
        }

        std::vector<Point> processIntersections(int process_mode /* I have different process modes */)
        {
            return workerIntersections(process_mode);
        }

    But that forces me to break const correctness, because workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?
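    One possible shape for that refactoring, sketched with hypothetical names: keep the shared detection logic in a const private method and confine all mutation to the non-const public entry point.

        #include <vector>

        struct Point { double x, y; };   // stand-in for the library's point type

        class BuildingOutlines {
        public:
            // Check: const, pure detection, safe to call on a const object.
            std::vector<Point> checkIntersections() const {
                return findIntersections();
            }

            // Process: the only mutating entry point; reuses the const detection.
            std::vector<Point> processIntersections(int process_mode) {
                std::vector<Point> hits = findIntersections();
                removeIntersections(hits, process_mode);   // mutation confined here
                return hits;
            }

        private:
            // Shared logic stays const, so both public methods can use it.
            std::vector<Point> findIntersections() const { return {}; }  // real geometry omitted
            void removeIntersections(const std::vector<Point>& hits, int process_mode) {}
        };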

    Read the article

  • From the Coalface - 3 - Work as hard as you can to be as lazy as you can!

    - by TATWORTH
    The saga of the Change Log. A recent conversation reminded me of the need for change logs within a database, to record when various change scripts were run. Creating the required table is simple. A typical table for this consists of:
    Id - identity integer primary key
    ChangeFileName - NVARCHAR(128), to hold the name of the file run
    DateAdded - DATETIME, non-null, with a default value of getutcdate()
    Purpose - NVARCHAR(128)
    Rerunnable - BIT, non-null, default 0
    With good design of the table, only two data values normally need to be supplied. Two stored procedures, one for inserting data and one for listing the log in reverse sequence, complete the database essentials. The complete implementation can be found in the CommonData solution at http://CommonData.CodePlex.Com By including a call to the add-Change-Log stored procedure, each script can log its name and purpose for posterity. The scripts that were applied to, say, the UAT system, and their sequence of application, can then be readily identified for running on the Live system.
    Formatting XML
    XML is often produced as one continuous string with no embedded CR/LF. To get it into human-readable form, open it in Visual Studio, swap to another tab and back, and click the Format Document button. The XML will then be nicely formatted!
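    A sketch of that table in T-SQL (my own rendering of the column list above; the definitive version is in the CommonData solution):

        CREATE TABLE ChangeLog (
            Id             INT IDENTITY(1,1) PRIMARY KEY,
            ChangeFileName NVARCHAR(128) NOT NULL,
            DateAdded      DATETIME      NOT NULL DEFAULT GETUTCDATE(),
            Purpose        NVARCHAR(128) NULL,
            Rerunnable     BIT           NOT NULL DEFAULT 0
        );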

    Read the article

  • What's the simplest way to provide a portable, locally running webservice server application?

    - by derFunk
    We have a large website running that offers a JsonRpc web service. For offline demonstration purposes I want to build a portable, locally running web server with a minimalistic feature replication of the live web service, and bundle it together with HTML files that make Ajax requests to it. This local server executable should have as few dependencies as possible. It's going to be run and presented by non-devs on non-dev Windows machines, so I would prefer a simple executable plus the service code; the language doesn't matter as long as it is .NET, PHP or Java. I'll need a small database behind it, which will probably be SQLite. It's important to say that for various reasons we cannot reuse the original web service code; we have to rewrite it for the local demo server, which is why I want to put minimal effort into the local server tech. An installer for distribution is not mandatory; it's okay to have a zip file with an executable in it which starts up the local web server. What would you recommend for realizing these requirements? I've done some research already, but would love to hear your opinions and get some pointers!
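    If .NET is on the table, a minimal self-hosted sketch using only System.Net.HttpListener from the base class library (note that on locked-down Windows machines a urlacl reservation via netsh may be needed):

        using System;
        using System.Net;
        using System.Text;

        class DemoServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/");
                listener.Start();
                Console.WriteLine("Demo service on http://localhost:8080/ - Ctrl+C to stop");
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();   // blocks per request
                    byte[] body = Encoding.UTF8.GetBytes("{\"result\":\"demo\"}");
                    ctx.Response.ContentType = "application/json";
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }
        }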

    Read the article

  • Creating my own PHP framework

    - by onlineapplab.com
    Disclaimer: I don't want to start any flame war, so no names of any frameworks will be mentioned. I've used quite a few of the existing PHP frameworks, and my experience in each case was similar: everything is nice at the beginning, but the moment you require something non-standard you run into a lot of problems fixing otherwise simple issues. In the case of frameworks following the MVC design pattern, there are issues with the implementation of each layer; for example, a lot of coding goes into the model and data-access layers when using an ORM, while the presentation layer is not much more than pure phtml. Some frameworks use their own wrappers for existing PHP functionality, in some cases severely limiting the original functionality. Depending on the framework you can have additional problems like a lack of documentation, a slow or non-existent development cycle and, last but not least, speed. A while ago I wrote my own framework which, while doing its job and being used for a few different applications, doesn't seem, after a couple more years of PHP experience, to be a perfect piece of coding. I could write my own framework again and use the additional experience I've gathered over these years to make it better; on the other hand, I'm aware that there are quite a few better programmers working on creating and upgrading the existing frameworks. So does it make any sense at all to write my own PHP framework when there are so many possibilities to choose from?

    Read the article

  • Recommended design pattern for object with optional and modifiable attributes? [on hold]

    - by Ikuzen
    I've been using the Builder pattern to create objects with a large number of attributes, where most of them are optional. But up until now I've declared them final, as recommended by Joshua Bloch and other authors, and haven't needed to change their values. What should I do, though, if I need a class with a substantial number of optional but non-final (mutable) attributes? My Builder pattern code looks like this:

        public class Example {
            // All possible parameters (optional or not)
            private final int param1;
            private final int param2;

            // Builder class
            public static class Builder {
                private final int param1; // Required parameters
                private int param2 = 0;   // Optional parameters - initialized to default

                // Builder constructor
                public Builder(int param1) {
                    this.param1 = param1;
                }

                // Setter-like methods for optional parameters
                public Builder param2(int value) {
                    param2 = value;
                    return this;
                }

                // build() method
                public Example build() {
                    return new Example(this);
                }
            }

            // Private constructor
            private Example(Builder builder) {
                param1 = builder.param1;
                param2 = builder.param2;
            }
        }

    Can I just remove the final keyword from the declaration to be able to access the attributes externally (through normal setters, for example)? Or is there a creational pattern that allows optional but non-final attributes that would be better suited to this case?
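    For what it's worth, a sketch of the straightforward variant: keep the Builder for construction, drop final only on the genuinely mutable fields, and expose setters just for those (param2 is assumed mutable here).

        public class Example {
            private final int param1;  // still immutable
            private int param2;        // mutable: 'final' dropped, setter added

            public static class Builder {
                private final int param1;
                private int param2 = 0;

                public Builder(int param1) { this.param1 = param1; }
                public Builder param2(int value) { param2 = value; return this; }
                public Example build() { return new Example(this); }
            }

            private Example(Builder builder) {
                param1 = builder.param1;
                param2 = builder.param2;
            }

            public void setParam2(int value) { param2 = value; }  // post-construction change
        }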

    Read the article

  • web application with secured sections, sessions and related trouble

    - by spirytus
    I would like to create a web application with the admin/checkout sections secured. Assuming I have SSL set up for subdomain.mydomain.com, I would like to make sure that all that top-secret stuff ;) like checkout pages and the admin section is transferred securely. Would it be OK to structure my application as below?

        subdomain.mydomain.com
          adminSectionFolder
            adminPage1.php
            adminPage2.php
          checkoutPagesFolder
            checkoutPage1.php
            checkoutPage2.php
            checkoutPage3.php
          homepage.php
          loginPage.php
          someOtherPage.php
          someNonSecureFolder
            nonSecurePage1.php
            nonSecurePage2.php
            nonSecurePage3.php
          imagesFolder
            image1.jpg
            image2.jpg
            image3.jpg

    Users would access my web application via http, as there is no need for SSL for the homepage and similar pages. Checkout/admin pages would have to be accessed via https though (which I would ensure via .htaccess redirects). I would also like to have a login form on every page of the site, including non-secure pages. Now my questions are:
    1. If I have a form on a non-secure page, e.g. http://subdomain.mydomain.com/homepage.php, and that form sends data to https://subdomain.mydomain.com/loginPage.php, is the data sent encrypted as if it were sent from https://subdomain.mydomain.com/homepage.php? I do realize users will not see the padlock, but the browser should still encrypt it, right?
    2. If on the secure page loginPage.php (or any other page accessed via https, for that instance) I created a session, a session ID would be assigned, and in the case of my web app something like the username of the logged-in user. Would I be able to access these session variables from http://subdomain.mydomain.com/homepage.php to, for example, display a greeting message? If the session ID is stored in cookies then I assume that would be trouble, but could someone clarify how it should be done?
    3. It seems important to have the username and password sent over SSL. Related to the above question, I think: would it actually make any sense to have the login secured via SSL, so the username/password are transferred securely, and then have the session ID transferred without SSL? I mean, wouldn't it really be the same if someone caught the username and password in transit as if they caught the session ID? Please let me know if I make sense here, because it feels like I'm missing something important.
    EDIT: I came up with an idea, but again, please let me know if it would work. Given the above, and assuming that sharing a session between http and https is as insecure as logging a user in via plain http, I guess on all non-secure pages, like the homepage etc., I could check whether the user is already logged in, and if so redirect from PHP to the https version of the same page. So the user fills in the login form on homepage.php, the details are sent over SSL to the backend (probably https://.../homepage.php), and any script at http://.../someOtherPage.php would always check whether a session exists and, if so, redirect the user to the https version of that page, https://.../someOtherPage.php. Would that work?
    4. To avoid the browser popping up the message "this page contains non-secure items...", should my links to CSS, images and other assets, e.g. in the case of https://subdomain.mydomain.com/checkoutPage1.php, be absolute ("/images/image1.jpg") or relative ("../images/image1.jpg")? I guess one of those has to work :)
    Wow, that's a long post; thanks for your patience if you got this far, and for any answers :) Oh, and I use PHP/Apache on shared hosting.
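    A sketch of the redirect idea from the EDIT, as a few lines of PHP included at the top of every non-secure page ($_SESSION['username'] stands in for whatever your login sets):

        <?php
        session_start();
        // If the visitor is already logged in, force the https version of this page.
        if (isset($_SESSION['username']) && empty($_SERVER['HTTPS'])) {
            header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
            exit;
        }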

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code: $("#btnStockQuote").click(function () { var symbol = $("#txtSymbol").val(); ajaxCallMethod("SampleService.ashx", "GetStockQuote", [symbol], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); // display a stock chart $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2"); },onPageError); }); The resulting output then looks like this: The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the 2 year stock data as part of the StockServer class which you can find in the sample download. The ability to return arbitrary data from a service is useful as you can see - in this case the chart is clearly associated with the service and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc. which is becoming increasingly more common to be returned from REST endpoints and other applications. Why reinvent? Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need to work through in most applications which is simple: return data, send back data and potentially retrieve data in various formats. While there are other solutions when it comes down to making AJAX callbacks and servicing REST like requests, I like the flexibility my home grown solution provides. Simply put it's still the easiest solution that I've found that addresses my common use cases: AJAX JSON RPC style callbacks Url based access XML and JSON Output from single method endpoint XML and JSON POST support, querystring input, routing parameter mapping UrlEncoded POST data support on callbacks Ability to return stream/raw string data Essentially ability to return ANYTHING from Service and pass anything All these features are available in various solutions but not together in one place. I've been using this code base for over 4 years now in a number of projects both for myself and commercial work and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need to create. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler and add a method or multiple methods and you have your generic endpoints.  It's a quick and easy way to add small code pieces that are pretty efficient as they're running through a pretty small handler implementation. I can have this up and running in a couple of minutes literally without any setup and returning just about any kind of data. Resources Download the Sample NuGet: Westwind Web and AJAX Utilities (Westwind.Web) ajaxCallMethod() Documentation Using the AjaxMethodCallback WebForms Control West Wind Web Toolkit Home Page West Wind Web Toolkit Source Code © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery, AJAX

    Read the article

  • nvcc not found, but only when using sudo

    - by dsp_099
    I can't get ANYTHING working on Linux. I'm trying to compile CudaMiner. sudo make:

        ypt-jane.o `test -f 'scrypt-jane.cpp' || echo './'`scrypt-jane.cpp
        mv -f .deps/cudaminer-scrypt-jane.Tpo .deps/cudaminer-scrypt-jane.Po
        nvcc -g -O2 -Xptxas "-abi=no -v" -arch=compute_10 --maxrregcount=64 --ptxas-options=-v -I./compat/jansson -o salsa_kernel.o -c salsa_kernel.cu
        /bin/bash: nvcc: command not found
        make[2]: *** [salsa_kernel.o] Error 127
        make[2]: Leaving directory `/var/progs/CudaMiner'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/var/progs/CudaMiner'
        make: *** [all] Error 2

    So, kind of interesting: nvcc:

        nvcc fatal : No input files specified; use option --help for more information

    Whereas sudo nvcc:

        sudo: nvcc: command not found

    Huh?? I have identical exports listed in ~/.bashrc AND /etc/bash.bashrc. (nvcc is located in /usr/local/cuda-5.0/bin/nvcc.) I also tried changing the current path, to no avail:

        $ sudo bash -c 'echo $PATH'
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        $ PATH=$PATH:/usr/local/cuda-5.0/bin/nvcc
        $ sudo bash -c 'echo $PATH'
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    Thanks in advance!
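    For context (general sudo behaviour, not specific to CUDA): sudo replaces the caller's PATH with the secure_path setting from /etc/sudoers, which is why nvcc resolves for your user but not under sudo. Two common workarounds, assuming nvcc lives in /usr/local/cuda-5.0/bin (note: add the directory, not the nvcc binary itself):

        # One-off: carry the current PATH through sudo for this command
        sudo env "PATH=$PATH:/usr/local/cuda-5.0/bin" make

        # Permanent: append :/usr/local/cuda-5.0/bin to the secure_path line
        sudo visudo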

    Read the article

  • What happens to C# 4 optional parameters when compiling against 3.5?

    Here's a method declaration that uses optional parameters:

        public Path Copy(
            Path destination,
            bool overwrite = false,
            bool recursive = false)

    Something you may not know is that Visual Studio 2010 will let you compile this against .NET 3.5, with no error or warning. You may be wondering (as I was) how it does that. Well, it takes the easy and rather obvious way of not trying to be too smart, and just ignores the optional parameters. So if you're compiling against 3.5 from Visual Studio 2010, ...

    Read the article

  • Formal Languages, Inductive Proofs & Regular Expressions

    - by MarkPearl
    So I am slogging away at my UNISA stuff. I have just finished the initial non-stop read-through of the first 11 chapters of my COS 201 textbook, "Introduction to Computer Theory 2nd Edition" by Daniel Cohen. It has been an interesting couple of days, with familiar concepts coming up as well as some new territory. In this posting I am going to cover the first couple of chapters of the book.

    Let's start with Formal Languages… What exactly is a formal language? Pretty much a no-duh question for me, but still a good one to ask: a formal language is a language that is defined in a precise mathematical way. Does that mean that the English language is a formal language? I would say no, and my main motivation for this is that one can have an English sentence that is grammatically correct yet ambiguous. For example, the ambiguous sentence: "I once shot an elephant in my pyjamas." For this, and possibly many other reasons that I am unaware of, English is termed a "Natural Language".

    So why the importance of formal languages in computer science? Again, a no-duh question in my mind… If we want computers to be effective and useful tools, then we need them to be able to evaluate a series of commands in some form of language where no confusion can exist as to what we were requesting. Imagine the mayhem if a computer misinterpreted a command to print a document and instead decided to delete it.

    So what is a formal language made up of? For my study purposes, a language is made up of a finite alphabet. For a formal language to exist there needs to be a specification of the language that describes whether a string of characters has membership in the language or not. There are two basic ways to do this:
    By a "machine" that will recognize strings of the language (e.g. Finite Automata).
    By a rule that describes how strings of a language can be formed (e.g. Regular Expressions).
    When we use the phrase "string of characters", we can also be referring to a "word".

    What is an Inductive Proof? So, not too far into my textbook, it of course starts referring to proofs of different types. I have worked through several different approaches to proofs in the past, but I can never remember their formal names, so when I saw "inductive proof" I thought to myself: what the heck is that? Google to the rescue… An inductive proof is like a normal proof, but it employs a neat trick which allows you to prove a statement about an arbitrary number n by first proving it is true when n is 1, then assuming it is true for n=k and showing it is true for n=k+1. The idea is that if you want to show that someone can climb to the nth floor of a fire escape, you need only show that you can climb the ladder up to the fire escape (n=1) and then show that you know how to climb the stairs from any level of the fire escape (n=k) to the next level (n=k+1).

    Does this sound like a form of recursion? No surprise, then, that in the same chapter they deal with recursive definitions. An example of a recursive definition for the language EVEN would be the three rules below:
    2 is in EVEN.
    If x is in EVEN then so is x+2.
    The only elements in the set EVEN are those that can be produced by the two rules above.
    Nothing too exciting… So if a definition for a language is given recursively, then it makes sense that statements about the language can be proved using induction.
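    To make the connection concrete, here is that induction carried out for EVEN (my own formulation, not Cohen's), in LaTeX-style notation:

        Claim: every $x \in \mathrm{EVEN}$ is divisible by 2.
        Base case: $2 \in \mathrm{EVEN}$ and $2 \mid 2$.
        Inductive step: assume $x \in \mathrm{EVEN}$ with $2 \mid x$; the only rule that
        produces new members yields $x + 2$, and $2 \mid x \Rightarrow 2 \mid (x + 2)$.
        Since every element of EVEN is generated by these rules, every element is divisible by 2.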
    Regular Expressions. So I am wondering to myself what use all this is. In fact, I find the biggest challenge with any university material is that it is quite hard to find the immediate practical applications of the theory in real-life stuff. How great was my joy when I suddenly saw the term regular expression being introduced. I had been introduced to regular expressions on Stack Overflow, where I was trying to recognize whether some text measurement entered by a user was in a valid form or not. For instance, in the imperial system of measurement, where you have feet and inches, the same length can be represented in many different ways. I had eventually turned to regular expressions as an easy way to check whether my parser could correctly parse the text or not and convert it to a normalized measurement.

    So, some rules about languages and regular expressions…
    Any finite language can be represented by at least one, if not more, regular expressions.
    A regular expression is almost a rule syntax for expressing how regular languages can be formed.
    Regular expressions are cool.
    For a regular expression to be valid for a language it must be able to generate all the words in the language and no other words. This is important. It doesn't help me if my regular expression parses 100% of my measurement texts but also lets one or two invalid texts pass as well.

    Okay, so this posting jumps around a bit, but it introduces some very basic fundamentals for the subject which will be built on in later postings… Time to go and do some practical examples now…
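    For instance, a deliberately simplified feet-and-inches matcher in Python (a sketch, not the parser from the Stack Overflow question):

        import re

        # Accepts forms like 5'10", 6 ft 2 in, 70in; rejects everything else.
        pattern = re.compile(
            r"^\s*(?:(?P<feet>\d+)\s*(?:'|ft|feet))?"
            r"\s*(?:(?P<inches>\d+)\s*(?:\"|in|inches))?\s*$")

        for text in ["5'10\"", "6 ft 2 in", "70in"]:
            m = pattern.match(text)
            if m and (m.group("feet") or m.group("inches")):
                total = int(m.group("feet") or 0) * 12 + int(m.group("inches") or 0)
                print(text, "->", total, "inches")   # normalized measurement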

    Read the article

  • Are there advantages for using recursion over iteration - other than sometimes readability and elegance?

    - by Prog
    I am about to make two assumptions. Please correct me if they're wrong: There isn't a recursive algorithm without an iterative equivalent. Iteration is always cheaper performance-wise than recursion (at least in general-purpose languages such as Java, C++, Python etc.). If it's true that recursion is always more costly than iteration, and that it can always be replaced with an iterative algorithm (in languages that allow it), then I think that the two remaining reasons to use recursion are elegance and readability. Some algorithms are expressed more elegantly with recursion, e.g. scanning a binary tree. However, apart from those, are there any reasons to use recursion over iteration? Does recursion have advantages over iteration other than sometimes elegance and readability?
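    The standard side-by-side illustration (factorial in Java): the recursive form mirrors the mathematical definition at the cost of one stack frame per step; the iterative form gives the same result in constant stack space.

        class Factorial {
            // Recursive: reads like the definition n! = n * (n-1)!
            static long recursive(long n) {
                return n <= 1 ? 1 : n * recursive(n - 1);
            }

            // Iterative: same result, no call-stack growth.
            static long iterative(long n) {
                long result = 1;
                for (long i = 2; i <= n; i++) result *= i;
                return result;
            }
        }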

    Read the article

  • How do I install Dan's Guardian on 12.04?

    - by Matt
    I'm trying to install DansGuardian on a virtual machine. The instructions ask me to run the ./configure script and then execute make install. The configure script runs fine, but make install throws errors:

        Making all in src
        make[2]: Entering directory `/webmin/dansguardian-2.10/src'
        g++ -DHAVE_CONFIG_H -I. -I.. -D__CONFFILE='"/usr/local/etc/dansguardian/dansguardian.conf"' -D__LOGLOCATION='"/usr/local/var/log/dansguardian/"' -D__PIDDIR='"/usr/local/var/run"' -D__PROXYUSER='"nobody"' -D__PROXYGROUP='"nobody"' -D__CONFDIR='"/usr/local/etc/dansguardian"' -g -O2 -MT dansguardian-fancy.o -MD -MP -MF .deps/dansguardian-fancy.Tpo -c -o dansguardian-fancy.o `test -f 'downloadmanagers/fancy.cpp' || echo './'`downloadmanagers/fancy.cpp
        downloadmanagers/fancy.cpp: In member function 'std::string fancydm::timestring(int)':
        downloadmanagers/fancy.cpp:507:72: error: 'snprintf' was not declared in this scope
        make[2]: *** [dansguardian-fancy.o] Error 1
        make[2]: Leaving directory `/webmin/dansguardian-2.10/src'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/webmin/dansguardian-2.10'
        make: *** [all] Error 2

    I'm running 12.04 LTS server x64.
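    For what it's worth, this particular error is usually just a missing declaration: newer g++/glibc header combinations no longer pull snprintf in transitively. A hand-patch sketch (my suggestion, not from the DansGuardian instructions):

        // near the top of downloadmanagers/fancy.cpp
        #include <stdio.h>   // declares snprintf, no longer included indirectly by newer g++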

    Read the article

  • Master Data Services Employees Sample Model

    - by Davide Mauri
    I’ve been playing with Master Data Services quite a lot in those last days and I’m also monitoring the web for all available resources on it. Today I’ve found this freshly released sample available on MSDN Code Gallery: SQL Server Master Data Services Employee Sample Model http://code.msdn.microsoft.com/SSMDSEmployeeSample This sample shows how Recursive Hierarchies can be modeled in order to represent a typical organizational chart scenario where a self-relationship exists on the Employee entity. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Killing Stuck Child JVMs

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL. In some situations the child JVMs may spin. This causes multiple startup/shutdown child JVM messages to be displayed and recursive child JVMs to be initiated and shunned, with a message like the following:

        Unable to establish connection on port …. after waiting .. seconds.

    The issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all child JVMs. If this issue occurs at your site then there are a number of options to address it:
    Configure an operating-system-level kill command to force the child JVM to be shunned when it becomes stuck.
    Configure a Process.destroy command to be used if the kill command is not configured or desired.
    Specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands.
    Note: This facility is also used when the parent JVM is shut down, to ensure no zombie child JVMs remain.
    The following additional settings must be added to the spl.properties for the Business Application Server to use this facility:
    spl.runtime.cobol.remote.kill.command – Specify the command used to kill the child JVM process. This can be a command, or a script to execute to provide additional information. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here. Note: The PID will be appended to the kill command string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used it must be in the path and be executable by the OS user running the system.
    spl.runtime.cobol.remote.destroy.enabled – Specify whether to use the Process.destroy command instead of the kill command. Specify true or false. The default value is false. Note: Unless otherwise required, it is recommended to use the kill command option if shunning JVMs is an issue; this value can then remain at its default, false.
    spl.runtime.cobol.remote.kill.delaysecs – Specify the number of seconds to wait for the child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.
    For example:

        spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
        spl.runtime.cobol.remote.destroy.enabled=false
        spl.runtime.cobol.remote.kill.delaysecs=10

    When a child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command executed, if provided. This is done after waiting spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the original Process.destroy command to be used on the process. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and the facility is therefore disabled. If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, child JVMs will not be forcibly killed.
    They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled defaults to false. It is recommended to invoke a script to issue the kill command instead of using kill -9 directly. For example, the following sample script ensures that the process id is an active cobjrun process before issuing the kill command:

        forcequit.sh

        #!/bin/sh
        THETIME=`date +"%Y-%m-%d %H:%M:%S"`
        if [ "$1" = "" ]
        then
          echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi
        javaexec=cobjrun
        ps e $1 | grep -c $javaexec
        if [ $? = 0 ]
        then
          echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
          kill -9 $1
          exit 0
        else
          echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi

    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:

        spl.runtime.cobol.remote.kill.command=forcequit.sh

    The forcequit script does not have any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call the script forcequit.sh and pass it the pid and the child JVM number, specify it as follows:

        spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}

    The script can then use the JVM number for logging purposes or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string. To use this facility the following patches must be installed:
    Patch 13719584 for Oracle Utilities Application Framework V2.1
    Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2
    Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1

    Read the article

  • Failed to compile Network Manager 0.9.4

    - by Oleksa
    After upgrading to 12.04 I needed to re-compile Network Manager 0.9.4.0 again. However, with version 0.9.4.0 I ran into the following error while compiling the dns-manager module (make messages translated from Russian):
$ make
...
Making all in dns-manager
make[4]: Entering directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src/dns-manager"
/bin/bash ../../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-manager.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-manager.Tpo -c -o libdns_manager_la-nm-dns-manager.lo `test -f 'nm-dns-manager.c' || echo './'`nm-dns-manager.c
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-manager.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-manager.Tpo -c nm-dns-manager.c -fPIC -DPIC -o .libs/libdns_manager_la-nm-dns-manager.o
mv -f .deps/libdns_manager_la-nm-dns-manager.Tpo .deps/libdns_manager_la-nm-dns-manager.Plo
/bin/bash ../../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-dnsmasq.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-dnsmasq.Tpo -c -o libdns_manager_la-nm-dns-dnsmasq.lo `test -f 'nm-dns-dnsmasq.c' || echo './'`nm-dns-dnsmasq.c
libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-dnsmasq.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-dnsmasq.Tpo -c nm-dns-dnsmasq.c -fPIC -DPIC -o .libs/libdns_manager_la-nm-dns-dnsmasq.o
nm-dns-dnsmasq.c: In function 'update':
nm-dns-dnsmasq.c:274:2: error: passing argument 1 of 'g_slist_copy' discards 'const' qualifier from pointer target type [-Werror]
/usr/include/glib-2.0/glib/gslist.h:82:10: note: expected 'struct GSList *' but argument is of type 'const struct GSList *'
cc1: all warnings being treated as errors
make[4]: *** [libdns_manager_la-nm-dns-dnsmasq.lo] Error 1
make[4]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src/dns-manager"
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src"
make[2]: *** [all] Error 2
make[2]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src"
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0"
make: *** [all] Error 2
Has anybody faced similar errors? Thank you in advance for your help.
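The error itself points at the conflict: glib declares g_slist_copy() as taking a non-const GSList *, and with -Werror the const mismatch becomes fatal. One possible local workaround, sketched below with assumed names (the real variable is whatever nm-dns-dnsmasq.c:274 passes), is to cast away the const; this is safe in practice because g_slist_copy() only reads the source list.
    #include <glib.h>

    /* Workaround sketch (names are assumptions, not from the actual source):
     * cast the const list before copying so the prototype matches. */
    static GSList *
    copy_const_list (const GSList *list)
    {
        return g_slist_copy ((GSList *) list);
    }
Alternatively, removing -Werror from the generated Makefile's warning flags lets the build continue with the warning left in place.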

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint on this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run
SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285;
against the AdventureWorks2012 database, I get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as
SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail AS sod ON sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285;
Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query:
SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522;
It doesn’t look like there’s a join here either, but look at the plan.
That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just as when you seek in the phone book to Farley, the Farleys you find are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query
SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276;
This one even prefers a Hash Match to a standard lookup! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best index would be a composite on both columns, such as (SalesPersonID, CustomerID), and it would still carry the SalesOrderID column as the CIX key). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture of how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this includes when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
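To make that last parenthetical concrete, the composite index the post alludes to would look something like the sketch below. This is hypothetical: AdventureWorks2012 ships with the single-column indexes instead, and the index name is my own invention.
    -- Hypothetical composite index; the clustered key (SalesOrderID) is
    -- carried at the leaf level automatically, so it is not listed here.
    CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPerson_Customer
        ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);
With this index in place, the two-seek Merge Join and the Hash Match plans above would collapse into a single seek.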

    Read the article

  • Recursion VS memory allocation

    - by Vladimir Kishlaly
    Which approach is more common in real-world code: recursion or iteration? For example, simple tree preorder traversal with recursion:
    void preorderTraversal( Node root ){
        if( root == null ) return;
        root.printValue();
        preorderTraversal( root.getLeft() );
        preorderTraversal( root.getRight() );
    }
and with iteration (using a stack):
    Push the root node on the stack
    While the stack is not empty
        Pop a node
        Print its value
        If right child exists, push the node's right child
        If left child exists, push the node's left child
In the first example we have recursive method calls; in the second, a new ancillary data structure. Time complexity is O(n) in both cases. So the main question is the memory footprint: which approach requires less memory?
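For concreteness, the iterative pseudocode above translates into Java roughly as follows (a sketch; the Node class is a minimal stand-in for whatever API the recursive version assumes):
    import java.util.ArrayDeque;
    import java.util.Deque;

    class Node {
        private final int value;
        private final Node left, right;
        Node(int value, Node left, Node right) { this.value = value; this.left = left; this.right = right; }
        Node getLeft()  { return left; }
        Node getRight() { return right; }
        void printValue() { System.out.println(value); }
    }

    class PreorderDemo {
        // Iterative preorder: push the right child before the left so the
        // left subtree comes off the LIFO stack (and is printed) first.
        static void preorderTraversal(Node root) {
            if (root == null) return;
            Deque<Node> stack = new ArrayDeque<>();
            stack.push(root);
            while (!stack.isEmpty()) {
                Node node = stack.pop();
                node.printValue();
                if (node.getRight() != null) stack.push(node.getRight());
                if (node.getLeft() != null)  stack.push(node.getLeft());
            }
        }
    }
On the memory question: both versions use O(h) auxiliary space in the worst case, where h is the tree height. The difference is that the explicit stack stores only node references, while each recursive call frame also carries a return address and locals, so the iterative version typically has the smaller constant factor and cannot overflow the call stack on deep trees.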

    Read the article

  • In C what is the difference between null and a new line character? Guys help please [migrated]

    - by Siddhartha Gurjala
    What's the conceptual difference and similarity between NULL and a newline character, i.e. between '\0' and '\n'? Explain their relevance for both integer and character data type variables and arrays. For reference, here are example snippets of a program to read and write a 2D char array.
PROGRAM CODE 1:
    #include<stdio.h>
    int main()
    {
        char sort(),stuname(),swap(),(*p)(),(*q)();
        int n;
        p=stuname;
        q=swap;
        printf("Let the number of students in the class be \n");
        scanf("%d",&n);
        fflush(stdin);
        sort(p,q,n);
        return 0;
    }
    char sort(p1,q1,n1)
    char (*p1)(),(*q1)();
    int n1;
    {
        (*p1)(n1);
        (*q1)();
    }
    char stuname(int nos) // number of students
    {
        char name[nos][256];
        int i,j;
        printf("Reading names of %d students started--->\n\n",nos);
        name[0][0]='k'; // initialising as non-NULL character
        for(i=0;i<nos;i++) // nos = number of students
        {
            printf("Give name of student %d\n",i);
            for(j=0;j<256;j++)
            {
                scanf("%c",&name[i][j]);
                if(name[i][j]=='\n')
                {
                    name[i][j]='\0';
                    j=257;
                }
            }
        }
        printf("\n\nWriting student names:\n\n");
        for(i=0;i<nos;i++)
        {
            for(j=0;j<256&&name[i][j]!='\0';j++)
            {
                printf("%c",name[i][j]);
            }
            printf("\n");
        }
    }
    char swap()
    {
        printf("Will swap shortly after getting clarity on scanf and %c");
    }
The above code works as intended, whereas the same logic with a slight difference (the marked for-loop below) does not give appropriate output. Here's the code:
PROGRAM CODE 2:
    #include<stdio.h>
    int main()
    {
        char sort(),stuname(),swap(),(*p)(),(*q)();
        int n;
        p=stuname;
        q=swap;
        printf("Let the number of students in the class be \n");
        scanf("%d",&n);
        fflush(stdin);
        sort(p,q,n);
        return 0;
    }
    char sort(p1,q1,n1)
    char (*p1)(),(*q1)();
    int n1;
    {
        (*p1)(n1);
        (*q1)();
    }
    char stuname(int nos) // number of students
    {
        char name[nos][256];
        int i,j;
        printf("Reading names of %d students started--->\n\n",nos);
        name[0][0]='k'; // initialising as non-NULL character
        for(i=0;i<nos;i++) // nos = number of students
        {
            printf("Give name of student %d\n",i);
            ***for(j=0;j<256&&name[i][j]!='\0';j++)***
            {
                scanf("%c",&name[i][j]);
                /*if(name[i][j]=='\n')
                {
                    name[i][j]='\0';
                    j=257;
                }*/
            }
        }
        printf("\n\nWriting student names:\n\n");
        for(i=0;i<nos;i++)
        {
            for(j=0;j<256&&name[i][j]!='\0';j++)
            {
                printf("%c",name[i][j]);
            }
            printf("\n");
        }
    }
    char swap()
    {
        printf("Will swap shortly after getting clarity on scanf and %c");
    }
Here is one more instance of the same program not giving proper output (again, the marked for-loop is the change):
PROGRAM CODE 3:
    #include<stdio.h>
    int main()
    {
        char sort(),stuname(),swap(),(*p)(),(*q)();
        int n;
        p=stuname;
        q=swap;
        printf("Let the number of students in the class be \n");
        scanf("%d",&n);
        fflush(stdin);
        sort(p,q,n);
        return 0;
    }
    char sort(p1,q1,n1)
    char (*p1)(),(*q1)();
    int n1;
    {
        (*p1)(n1);
        (*q1)();
    }
    char stuname(int nos) // number of students
    {
        char name[nos][256];
        int i,j;
        printf("Reading names of %d students started--->\n\n",nos);
        name[0][0]='k'; // initialising as non-NULL character
        for(i=0;i<nos;i++) // nos = number of students
        {
            printf("Give name of student %d\n",i);
            ***for(j=0;j<256&&name[i][j]!='\n';j++)***
            {
                scanf("%c",&name[i][j]);
                /*if(name[i][j]=='\n')
                {
                    name[i][j]='\0';
                    j=257;
                }*/
            }
            name[i][i]='\0';
        }
        printf("\n\nWriting student names:\n\n");
        for(i=0;i<nos;i++)
        {
            for(j=0;j<256&&name[i][j]!='\0';j++)
            {
                printf("%c",name[i][j]);
            }
            printf("\n");
        }
    }
    char swap()
    {
        printf("Will swap shortly after getting clarity on scanf and %c");
    }
Why do program code 2 and program code 3 not work as expected, the way code 1 does?
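As a baseline for the question, here is a minimal, self-contained illustration of the difference (my own sketch, separate from the snippets above): '\0' is the character with integer value 0 that C uses to terminate strings, while '\n' is the newline control character, value 10 on ASCII systems.
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char s[] = "hi\n";                      /* stored as 'h','i','\n','\0' */
        printf("'\\0' as an int: %d\n", '\0');  /* prints 0 */
        printf("'\\n' as an int: %d\n", '\n');  /* prints 10 on ASCII systems */
        printf("strlen(s) = %zu\n", strlen(s)); /* prints 3: counts '\n', stops at '\0' */
        return 0;
    }
This is also why code 1 replaces '\n' with '\0' after reading: scanf("%c") stores the raw newline, but the printing loop stops only when it meets a terminator.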

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spend some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector. To be honest, this was a bit of a misnomer, as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. As with lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function.
    let rec printMessage message =
        printfn message
    let foo x =
        (x + 1)
    let rec fib x =
        if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1
The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular, it has a return statement that allows function bodies to finish early. I used some of the built-in functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree, to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others, where we simply regenerate code that looks more like CPS style. The next thing to get working was library-level bindings of values, where these values are calculated at runtime.
    let x = [1 ; 2 ; 3 ; 4]
    let y = List.map (fun x -> foo x) x
The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases.
    type Foo =
        | Something of int
        | Nothing
    type 'a Foo2 =
        | Something2 of 'a
        | Nothing2
Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a decompilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions that could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again.
However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate them automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered, it was a useful way to gain more familiarity with the process of writing an add-in and to understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are doing MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp of what mappers and reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) to do this in SQL... Now I am hoping that next year we will see tools vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!).
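For anyone who hasn't seen the construct he's referring to, a minimal recursive WITH in Oracle SQL looks something like this (a generic sketch, not the slide from the session):
    -- Generates the numbers 1 through 10; the second branch of the UNION ALL
    -- references the CTE itself, which is what makes it recursive.
    WITH gen (n) AS (
      SELECT 1 FROM dual
      UNION ALL
      SELECT n + 1 FROM gen WHERE n < 10
    )
    SELECT n FROM gen;
The same recursive branch is where a reducer-style aggregation step can be folded in, which is what makes the construct interesting for MapReduce-style work.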

    Read the article

  • why installing lame it is getting failed

    - by Rahul Mehta
    I want to install ffmpeg with mp3lame enabled. To do this I am following this tutorial: http://ubuntuforums.org/showpost.php?p=9868359&postcount=1289. In step 2 I get an error that libfaac is not found, and in step 5 installing lame fails with the error below. Why is it failing, and what should I do?
reach121@youngib:~/lame-3.98.4$ sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.98.4" --backup=no --default --deldoc=yes
checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran
This software is released under the GNU GPL.
*****************************************
**** Debian package creation selected ***
*****************************************
This package will be built according to these values:
0 - Maintainer: [ root@youngib ]
1 - Summary: [ Package created with checkinstall 1.6.2 ]
2 - Name: [ lame-ffmpeg ]
3 - Version: [ 3.98.4 ]
4 - Release: [ 1 ]
5 - License: [ GPL ]
6 - Group: [ checkinstall ]
7 - Architecture: [ amd64 ]
8 - Source location: [ lame-3.98.4 ]
9 - Alternate source location: [ ]
10 - Requires: [ ]
11 - Provides: [ lame-ffmpeg ]
12 - Conflicts: [ ]
13 - Replaces: [ ]
Enter a number to change any of them or press ENTER to continue: Installing with make install...
========================= Installation results ===========================
Making install in mpglib
make[1]: Entering directory `/home/reach121/lame-3.98.4/mpglib'
make[2]: Entering directory `/home/reach121/lame-3.98.4/mpglib'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/home/reach121/lame-3.98.4/mpglib'
make[1]: Leaving directory `/home/reach121/lame-3.98.4/mpglib'
Making install in libmp3lame
make[1]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
Making install in i386
make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
Making install in vector
make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib"
/bin/bash ../libtool --mode=install /usr/bin/install -c 'libmp3lame.la' '/usr/local/lib/libmp3lame.la'
/usr/bin/install -c .libs/libmp3lame.lai /usr/local/lib/libmp3lame.la
/usr/bin/install -c .libs/libmp3lame.a /usr/local/lib/libmp3lame.a
chmod 644 /usr/local/lib/libmp3lame.a
ranlib /usr/local/lib/libmp3lame.a
PATH="$PATH:/sbin" ldconfig -n /usr/local/lib
----------------------------------------------------------------------
Libraries have been installed in: /usr/local/lib
If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable during linking
- use the `-Wl,--rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
make[1]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
Making install in frontend
make[1]: Entering directory `/home/reach121/lame-3.98.4/frontend'
make[2]: Entering directory `/home/reach121/lame-3.98.4/frontend'
test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin"
/bin/bash ../libtool --mode=install /usr/bin/install -c 'lame' '/usr/local/bin/lame'
/usr/bin/install -c lame /usr/local/bin/lame
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/home/reach121/lame-3.98.4/frontend'
make[1]: Leaving directory `/home/reach121/lame-3.98.4/frontend'
Making install in Dll
make[1]: Entering directory `/home/reach121/lame-3.98.4/Dll'
make[2]: Entering directory `/home/reach121/lame-3.98.4/Dll'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/home/reach121/lame-3.98.4/Dll'
make[1]: Leaving directory `/home/reach121/lame-3.98.4/Dll'
Making install in debian
make[1]: Entering directory `/home/reach121/lame-3.98.4/debian'
make[2]: Entering directory `/home/reach121/lame-3.98.4/debian'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/home/reach121/lame-3.98.4/debian'
make[1]: Leaving directory `/home/reach121/lame-3.98.4/debian'
Making install in doc
make[1]: Entering directory `/home/reach121/lame-3.98.4/doc'
Making install in html
make[2]: Entering directory `/home/reach121/lame-3.98.4/doc/html'
make[3]: Entering directory `/home/reach121/lame-3.98.4/doc/html'
make[3]: Nothing to be done for `install-exec-am'.
test -z "/usr/local/share/doc/lame/html" || /bin/mkdir -p "/usr/local/share/doc/lame/html"
/bin/mkdir: cannot create directory `/usr/local/share/doc': No such file or directory
make[3]: *** [install-pkghtmlDATA] Error 1
make[3]: Leaving directory `/home/reach121/lame-3.98.4/doc/html'
make[2]: *** [install-am] Error 2
make[2]: Leaving directory `/home/reach121/lame-3.98.4/doc/html'
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory `/home/reach121/lame-3.98.4/doc'
make: *** [install-recursive] Error 1
**** Installation failed. Aborting package creation.
Cleaning up...OK
Bye.
reach121@youngib:~/lame-3.98.4$
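The actual failure is the documentation install step: /bin/mkdir cannot create /usr/local/share/doc. One workaround to try (an assumption based on the log, not a verified fix) is to create the directory tree by hand and then re-run the same checkinstall command:
    # Pre-create the directory the doc install step fails on, then retry.
    # Adjust the path if lame was configured with a non-default --prefix.
    sudo mkdir -p /usr/local/share/doc/lame/html
    sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.98.4" \
         --backup=no --default --deldoc=yes
If the manual mkdir also fails, check the permissions and existence of /usr/local/share itself before retrying.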

    Read the article

  • Theoretically bug-free programs

    - by user2443423
    I have read a lot of articles which state that code can't be bug-free, and they talk about these theorems:
    Halting problem
    Gödel's incompleteness theorem
    Rice's theorem
Actually, Rice's theorem looks like an implication of the halting problem, and the halting problem is closely related to Gödel's incompleteness theorem. Does this imply that every program will have at least one unintended behavior? Or does it mean that it's not possible to write code to verify it? What about recursive checking? Let's assume that I have two programs. Both of them have bugs, but they don't share the same bug. What will happen if I run them concurrently? And of course, most of the discussions talk about Turing machines. What about linear-bounded automata (real computers)?
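For readers who haven't seen it, the core of the halting-problem argument these articles rest on fits in a few lines. This is an illustrative sketch only: halts() is a hypothetical oracle that cannot actually be implemented, which is exactly the point.
    def halts(program, data):
        """Hypothetical oracle: returns True iff program(data) terminates."""
        raise NotImplementedError  # no total, correct implementation can exist

    def paradox(p):
        # Do the opposite of whatever the oracle predicts for p run on itself.
        if halts(p, p):
            while True:
                pass   # loop forever, contradicting the oracle's "it halts"
        else:
            return     # halt immediately, contradicting the oracle's "it loops"

    # paradox(paradox) contradicts halts() either way, so halts() cannot exist.
Note what this does and does not say: it limits verification in general, but it does not mean every individual program must contain a bug; specific programs can still be proven correct against specific specifications.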

    Read the article

< Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >