Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • How to set the ID of an item when the item is not yet created?

    - by hoppl
    I'll try to make myself clear with an example: imagine an admin page for adding a product to a database (product list). When I open the ADD ITEM page, the item is not yet created (not until I click the submit button), but say I want to add the categories this product will appear in (with AJAX, for example). When I run the AJAX script, I need to tell it which ID (my product) to attach these categories to. How should I do this? Is inserting a blank item into the database (to get mysql_insert_id) when the page opens a good way to do it? Is it prone to conflicts or errors? How do you guys do it?
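    For reference, a minimal sketch of the draft-row idea mentioned above (insert a placeholder row up front just to obtain an ID), using the old mysql_* API the question refers to; the products table and its status column are hypothetical names:

      <?php
      // Create a placeholder row as soon as the ADD ITEM page opens,
      // so AJAX calls have a real ID to attach categories to.
      mysql_query("INSERT INTO products (status) VALUES ('draft')");
      $product_id = mysql_insert_id();

      // AJAX endpoints can now link categories to $product_id.
      // On final submit, fill in the real fields and mark the row live:
      mysql_query("UPDATE products SET name = '...', status = 'published'
                   WHERE id = " . (int)$product_id);

    The main cost of this approach is stale draft rows from abandoned pages, so a periodic cleanup that deletes old 'draft' rows is usually part of the design.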

    Read the article

  • .NET File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder-sync backup tool for myself and ran into quite a roadblock using File.Copy. In tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that File.Copy was over 3x slower than running xcopy from the command line to copy the same files and folders. My C# version takes over 16 minutes to copy the files, whereas xcopy takes only 5 minutes. I've tried searching for help on this topic, but all I find is people complaining about slow copying of large files over a network. This is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix them. Are there any common or easy ways to replace File.Copy with something faster?
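    For reference, one common replacement people try is a plain buffered stream copy with a large buffer. The following is a minimal illustrative sketch (assuming .NET 4 or later for Stream.CopyTo), not a guaranteed speed-up:

      using System.IO;

      static void CopyFileBuffered(string source, string dest)
      {
          // 1 MB buffer shared by both streams
          const int BufferSize = 1 << 20;
          using (var input = new FileStream(source, FileMode.Open, FileAccess.Read,
                                            FileShare.Read, BufferSize))
          using (var output = new FileStream(dest, FileMode.Create, FileAccess.Write,
                                             FileShare.None, BufferSize))
          {
              input.CopyTo(output, BufferSize);
          }
      }

    Note that with ~44,000 tiny files, per-file overhead (open, close, metadata) tends to dominate the copy time, so batching or parallelizing the copies may matter more than buffer size.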

    Read the article

  • MySQLdb Python escaping: ? or %s?

    - by asldkncvas
    Dear Everyone, I am currently using MySQLdb. What is the correct way to escape strings in MySQLdb arguments? Note that E = lambda x: x.encode('utf-8') and my connection is set with charset='utf8'. These are the errors I am getting for these arguments: w1, w2 = u'??', u'??' 1) self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2))) fails with: File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute TypeError: not all arguments converted during string formatting 2) self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2))) works fine, but when w1 or w2 has \ inside, the escaping apparently fails. I personally know that %s is not a good method to pass in arguments due to injection attacks, etc.
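    For reference, a minimal sketch of MySQLdb's intended parameter style: the driver uses the DB-API 'format' paramstyle, so %s placeholders plus a parameter tuple are the escaped path ('?' belongs to other drivers such as sqlite3). This is a placeholder mechanism handled by the driver, not Python string interpolation, so it is not the injection-prone pattern; the connection details below are hypothetical:

      import MySQLdb

      conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                             db="test", charset="utf8")
      cur = conn.cursor()

      w1, w2 = u"word1", u"word2"
      # %s here is filled in by the driver, which escapes quotes and
      # backslashes in the bound values itself; with charset='utf8',
      # unicode values typically need no manual .encode('utf-8').
      cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (w1, w2))
      row = cur.fetchone()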

    Read the article

  • LIKE and % Wildcard in Doctrine's findBy*()

    - by 01010011
    Hi, how do I write the following MySQL query using Doctrine's findBy*() method? SELECT column_name1, column_name2 FROM table_name WHERE column_name3 LIKE '%search_key%'; For example, to fetch multiple rows from a column named "ColumnName" using Doctrine: $users = Doctrine::getTable('User')->findByColumnName('active'); echo $users[0]->username; echo $users[1]->username; I tried: $search_key = 'some value'; $users = Doctrine::getTable('User')->findByColumnName('%$search_key%'); echo $users[0]->username; echo $users[1]->username; and I got no errors, but nothing was displayed. Any assistance would be really appreciated. Thanks in advance.
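    For comparison, Doctrine's magic findBy*() methods do exact matching; a LIKE search is usually written with DQL instead. A minimal sketch, assuming Doctrine 1.x as in the question (table and column names taken from the query above):

      <?php
      $search_key = 'some value';
      $users = Doctrine_Query::create()
          ->select('u.column_name1, u.column_name2')
          ->from('User u')
          ->where('u.column_name3 LIKE ?', '%' . $search_key . '%')
          ->execute();

      // Note: in the attempt above, '%$search_key%' is a single-quoted PHP
      // string, so the variable is never interpolated into it.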

    Read the article

  • Running lmgrd on ubuntu 14.04 LTS

    - by SumanBhatR
    I have installed Xilinx 14.7 on an Ubuntu 14.04 LTS machine (i386 - 64bit), but I am unable to run lmgrd (to start the license server). When I googled this problem, I found that the lsb-core package needs to be installed. But the package has many dependencies, and I want to know how to install lsb-core with all the necessary dependencies. On running sudo apt-get install lsb-core I got the following output:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Package lsb-core is not available, but is referred to by another package.
      This may mean that the package is missing, has been obsoleted, or
      is only available from another source
      E: Package 'lsb-core' has no installation candidate

    So I downloaded the lsb-core package from the http://packages.ubuntu.com/trusty/misc/lsb-core site and used "sudo dpkg -i ./lsb-core_4.1+Debian11ubuntu6_i386.deb" to install it. By doing that, I got the following output:

      Selecting previously unselected package lsb-core.
      (Reading database ... 163205 files and directories currently installed.)
      Preparing to unpack .../lsb-core_4.1+Debian11ubuntu6_i386.deb ...
      Unpacking lsb-core (4.1+Debian11ubuntu6) ...
      dpkg: dependency problems prevent configuration of lsb-core:
       lsb-core depends on libc6 (>= 2.3.5).
       lsb-core depends on libz1.
       lsb-core depends on libncurses5.
       lsb-core depends on libpam0g.
       lsb-core depends on lsb-invalid-mta (= 4.1+Debian11ubuntu6) | mail-transport-agent.
       lsb-core depends on at.
       lsb-core depends on binutils.
       lsb-core depends on cron | cron-daemon.
       lsb-core depends on libc6-dev | libc-dev.
       lsb-core depends on locales.
       lsb-core depends on m4.
       lsb-core depends on mailx | mailutils.
       lsb-core depends on ncurses-term.
       lsb-core depends on pax.
       lsb-core depends on psmisc.
       lsb-core depends on alien (= 8.36).
       lsb-core depends on python3.
       lsb-core depends on lsb-security (= 4.1+Debian11ubuntu6).
       lsb-core depends on time.
      dpkg: error processing package lsb-core (--install):
       dependency problems - leaving unconfigured
      Processing triggers for man-db (2.6.7.1-1) ...
      Errors were encountered while processing:
       lsb-core

    So I want to know how to install the lsb-core package with all the necessary dependencies in one go. Thanks for the help.

    Read the article

  • VS 2010 JavaScript editor – matching braces highlighting – is it so difficult to implement?

    - by AGS777
    I do not know. Just curious. But first things first. As a web developer I spend about 80% of my work time editing JavaScript code, and since my server-side platform is .NET, it would be very convenient to have a decent JavaScript text editor within the Visual Studio IDE. So, Visual Studio 2010 is out. Downloaded and installed. What were my expectations regarding the JavaScript editor? Pretty low, actually. I just wanted to have matching braces highlighted, eventually. That's all. Yes, I know about the Ctrl + ] shortcut, but it is not even remotely close to convenient. And the result? Alas. Without further ado, just look at some real-world fragment of code from the jQuery Templates Proposal experimental plugin as I see it in the Notepad++, Notepad2 and Visual Studio 2010 editors respectively (screenshots: Notepad++, Notepad2, Visual Studio 2010). Look at the highlighted parentheses, regular expression literals, numbers. Do you have a feeling that the last screenshot is not very informative in comparison with the other ones? If yes, then my question is: why? Instead I was given IntelliSense. Sorry, but I do not need it to rot my mind. Especially one which does not always work properly (try to use it with the base2 library, for example). With all the expressive power of the language, I have to know what I am doing. Instead I still have the same plain old Notepad with some of the JavaScript keywords colorized, plus partially functional/useful IntelliSense. What I do need is just a little help to make fewer errors when I type the code – some essential text editor facilities that I really need. Give me that, and only then feel free to improve on something else. Maybe I am wrong. Then, sorry. I just cannot believe that I have to wait for another couple of years to get a very basic code editor feature.

    Read the article

  • Using PHP OCI8 with 32-bit PHP on Windows 64-bit

    - by christopher.jones
    The world migration from 32-bit to 64-bit operating systems is gaining pace. However, I've seen a couple of customers having difficulty with the PHP OCI8 extension and Oracle DB on Windows 64-bit platforms. The errors vary depending on how PHP is run. They may appear in the Apache or PHP log:

      Unable to load dynamic library 'C:\Program Files (x86)\PHP\ext\php_oci8_11g.dll' - %1 is not a valid Win32 application.

    or

      Warning: oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that PATH includes the directory with Oracle Instant Client libraries

    Other than IIS permission issues, a common cause seems to be trying to use PHP with libraries from an Oracle 64-bit database on the same machine. There is currently no 64-bit version of PHP on http://php.net/ so there is a library mismatch. A solution is to install Oracle Instant Client 32-bit and make sure that PHP uses these libraries, while not interfering with the 64-bit database on the same machine. Warning: the following hacky steps come untested from a Linux user. Unzip Oracle Instant Client 32-bit and move it to C:\WINDOWS\SYSWOW64\INSTANTCLIENT_11_2. You may need to do this in a console with elevated permissions. Edit your PATH environment variable and insert C:\WINDOWS\SYSTEM32\INSTANTCLIENT_11_2 in the directory list before the entry for the Oracle Home library. Windows makes it so that all 32-bit applications that reference C:\WINDOWS\SYSTEM32 actually see the contents of the C:\WINDOWS\SYSWOW64 directory. Your 64-bit database won't find an Instant Client in the real, physical C:\WINDOWS\SYSTEM32 directory and will continue to use the database libraries. Some of our Windows team are concerned about this hack and prefer a more "correct" solution that (i) doesn't require changing the Windows system directory, (ii) doesn't add to the "memory" burden about what was configured on the system, and (iii) works when there are multiple database versions installed. That solution is to write a script which sets the 64-bit (or 32-bit) Oracle libraries in the path as needed before invoking the relevant bit-ness application. This does have a weakness when the application is started as a service. As a footnote: if you don't have a local database and simply need to have 32-bit and 64-bit Instant Client accessible at the same time, try the "symbolic" link approach covered in the hack in this OTN forum thread. Reminder warning: this blog post came untested from a Linux user.

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
    It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts: logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications – all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes. Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of its ViewEngines – using Razor or WebForms views – to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output, and it also solves the problem of where to store the templates for rendering this content in nothing more than perhaps a separate view folder. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine, which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from the core ASP.NET runtime, so it's actually a lot easier to get View output into a string.

    Getting View Output from within an MVC Application

    If you need to generate string output from an MVC application and pass some model data to it, the process to capture this output is fairly straightforward and involves only a handful of lines of code. The catch is that this particular approach requires an active ControllerContext that can be passed to the view, which means the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:

      /// <summary>
      /// Class that renders MVC views to a string using the
      /// standard MVC View Engine to render the view.
      ///
      /// Note: This class can only be used within MVC
      /// applications that have an active ControllerContext.
      /// </summary>
      public class ViewRenderer
      {
          /// <summary>
          /// Required Controller Context
          /// </summary>
          protected ControllerContext Context { get; set; }

          public ViewRenderer(ControllerContext controllerContext)
          {
              Context = controllerContext;
          }

          /// <summary>
          /// Renders a full MVC view to a string. Will render with the full MVC
          /// View engine including running _ViewStart and merging into _Layout
          /// </summary>
          /// <param name="viewPath">
          /// The path to the view to render. Either in same controller, shared by
          /// name or as fully qualified ~/ path including extension
          /// </param>
          /// <param name="model">The model to render the view with</param>
          /// <returns>String of the rendered view or null on error</returns>
          public string RenderView(string viewPath, object model)
          {
              return RenderViewToStringInternal(viewPath, model, false);
          }

          /// <summary>
          /// Renders a partial MVC view to string. Use this method to render
          /// a partial view that doesn't merge with _Layout and doesn't fire
          /// _ViewStart.
          /// </summary>
          /// <param name="viewPath">
          /// The path to the view to render. Either in same controller, shared by
          /// name or as fully qualified ~/ path including extension
          /// </param>
          /// <param name="model">The model to pass to the viewRenderer</param>
          /// <returns>String of the rendered view or null on error</returns>
          public string RenderPartialView(string viewPath, object model)
          {
              return RenderViewToStringInternal(viewPath, model, true);
          }

          public static string RenderView(string viewPath, object model,
                                          ControllerContext controllerContext)
          {
              ViewRenderer renderer = new ViewRenderer(controllerContext);
              return renderer.RenderView(viewPath, model);
          }

          public static string RenderPartialView(string viewPath, object model,
                                                 ControllerContext controllerContext)
          {
              ViewRenderer renderer = new ViewRenderer(controllerContext);
              return renderer.RenderPartialView(viewPath, model);
          }

          protected string RenderViewToStringInternal(string viewPath, object model,
                                                      bool partial = false)
          {
              // first find the ViewEngine for this view
              ViewEngineResult viewEngineResult = null;
              if (partial)
                  viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
              else
                  viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

              if (viewEngineResult == null)
                  throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound);

              // get the view and attach the model to view data
              var view = viewEngineResult.View;
              Context.Controller.ViewData.Model = model;

              string result = null;

              using (var sw = new StringWriter())
              {
                  var ctx = new ViewContext(Context, view,
                                            Context.Controller.ViewData,
                                            Context.Controller.TempData,
                                            sw);
                  view.Render(ctx, sw);
                  result = sw.ToString();
              }

              return result;
          }
      }

    The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path, which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery), or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views, although I've only tried it with Razor views. Note that WebForms views might actually be better for plain text, as Razor adds all sorts of white space into its output when there are code blocks in the template; the WebForms engine provides more accurate rendering for raw text scenarios. Once a view engine is found, the view to render can be retrieved. Views in MVC render based on data that comes off the controller, like the ViewData which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data a ViewContext is created, which is then used to render the view. The View picks up the Model and other data from the ViewContext internally and processes the View the same way it would be processed if it were to send its output into the HTTP output stream. The difference is that we can override the ViewContext's output stream, which we provide and capture into a StringWriter(). After rendering completes, the result holds the output string. If an error occurs, the error behavior is similar to what you see with regular MVC errors – you get a full yellow screen of death including the view error information with the line of error highlighted. It's your responsibility to handle the error – or let it bubble up to your regular Controller Error filter if you have one. To use the simple class you only need a single line of code if you call the static methods.
    Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:

      [HttpPost]
      public ActionResult ContactSeller(ContactSellerViewModel model)
      {
          InitializeViewModel(model);

          var entryBus = new busEntry();
          var entry = entryBus.LoadByDisplayId(model.EntryId);

          if (string.IsNullOrEmpty(model.Email))
              entryBus.ValidationErrors.Add("Email address can't be empty.", "Email");
          if (string.IsNullOrEmpty(model.Message))
              entryBus.ValidationErrors.Add("Message can't be empty.", "Message");

          model.EntryId = entry.DisplayId;
          model.EntryTitle = entry.Title;

          if (entryBus.ValidationErrors.Count > 0)
          {
              ErrorDisplay.AddMessages(entryBus.ValidationErrors);
              ErrorDisplay.ShowError("Please correct the following:");
          }
          else
          {
              string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",
                                                       model, ControllerContext);
              string title = entry.Title + " (" + entry.DisplayId + ") - " +
                             App.Configuration.ApplicationName;
              AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false);
          }
          return View(model);
      }

    Simple! The view in this case is just a plain MVC view, and in this case it's a very simple plain text email message (edited for brevity here) that is created and sent off:

      @model ContactSellerViewModel
      @{
          Layout = null;
      }re: @Model.EntryTitle

      @Model.ListingUrl

      @Model.Message

      ** SECURITY ADVISORY - AVOID SCAMS
      ** Avoid: wiring money, cross-border deals, work-at-home
      ** Beware: cashier checks, money orders, escrow, shipping
      ** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html

    Obviously this is a very simple view (I edited out more from this page to keep it brief) – but other template views are much more complex HTML documents or long messages that are occasionally updated, and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages.

    Partial Rendering

    Notice that I'm rendering a full View here. In the view I explicitly set Layout = null to avoid pulling in _layout.cshtml for this view. This can also be controlled externally by calling the RenderPartial method instead:

      string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml",
                                                      model, ControllerContext);

    With this line of code no layout page (or _viewstart) will be loaded, so the output generated is just what's in the view. I find myself using partials most of the time when rendering templates, since the target of templates usually tends to be emails or other HTML-fragment-like output, so the RenderPartialView() method is definitely useful to me.

    Rendering without a ControllerContext

    The preceding class is great when you need template rendering from within MVC controller actions or anywhere you have access to the request Controller. But if you don't have a controller context handy – maybe inside a static utility function, a non-Web application, or an operation that runs asynchronously in ASP.NET – the above code can't be used. I haven't found a way to manually create a ControllerContext to provide the ViewContext() what it needs from outside of the MVC infrastructure. However, there are ways to accomplish this, but they are a bit more complex. It's possible to host the RazorEngine on your own, which sidesteps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back.
    It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine is to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuilt and reparsed. Anyway, in the same app as above I have one scenario where no ControllerContext is available: a background scheduler running inside of the app that fires on timed intervals. This process could be external, but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:

      var model = new SearchNotificationViewModel()
      {
          Entries = entries,
          Notification = notification,
          User = user
      };

      // TODO: Need logging for errors sending
      string razorError = null;
      var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml",
                                                model, razorError);

    which references a couple of helper functions that set up my RazorFolderHostContainer class:

      public static string RenderRazorTemplate(string virtualPath, object model,
                                               string errorMessage = null)
      {
          var razor = AppUtils.CreateRazorHost();
          var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\");
          var merged = razor.RenderTemplateToString(path, model);
          if (merged == null)
              errorMessage = razor.ErrorMessage;
          return merged;
      }

      /// <summary>
      /// Creates a RazorFolderHostContainer and starts it.
      /// Call .Stop() when you're done with it.
      ///
      /// This is a static instance
      /// </summary>
      /// <param name="binBasePath"></param>
      /// <param name="forceLoad"></param>
      /// <returns></returns>
      public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null,
                                                             bool forceLoad = false)
      {
          if (binBasePath == null)
          {
              if (HttpContext.Current != null)
                  binBasePath = HttpContext.Current.Server.MapPath("~/");
              else
                  binBasePath = AppDomain.CurrentDomain.BaseDirectory;
          }
          if (_RazorHost == null || forceLoad)
          {
              if (!binBasePath.EndsWith("\\"))
                  binBasePath += "\\";

              //var razor = new RazorStringHostContainer();
              var razor = new RazorFolderHostContainer();
              razor.TemplatePath = binBasePath;
              binBasePath += "bin\\";
              razor.BaseBinaryFolder = binBasePath;
              razor.UseAppDomain = false;
              razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll");
              razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll");
              razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll");
              razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll");
              razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll");
              razor.ReferencedAssemblies.Add("System.Web.dll");
              razor.ReferencedNamespaces.Add("System.Web");
              razor.ReferencedNamespaces.Add("ClassifiedsBusiness");
              razor.ReferencedNamespaces.Add("ClassifiedsWeb");
              razor.ReferencedNamespaces.Add("Westwind.Web");
              razor.ReferencedNamespaces.Add("Westwind.Utilities");

              _RazorHost = razor;
              _RazorHost.Start();
              //_RazorHost.Engine.Configuration.CompileToMemory = false;
          }
          return _RazorHost;
      }

    The RazorFolderHostContainer is essentially a full runtime that mimics a folder structure like a typical Web app does, including caching semantics and compiling code only if code changes on disk. It maps a folder hierarchy to views using the ~/ path syntax.
    The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor – the expression expansion and code execution are the same, but some of the support methods like sections, helpers, etc. are not all there, so templates have to be a bit simpler. There are other hosts provided as well to directly execute templates from strings (using RazorStringHostContainer). The following is an example of an HTML email template:

      @inherits RazorHosting.RazorTemplateFolderHost<ClassifiedsWeb.SearchNotificationViewModel>
      <html>
      <head>
          <title>Search Notifications</title>
          <style>
              body { margin: 5px; font-family: Verdana, Arial; font-size: 10pt; }
              h3 { color: SteelBlue; }
              .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; }
          </style>
      </head>
      <body>
          Hello @Model.User.Name,<br />
          <p>Below are your Search Results for the search phrase:</p>
          <h3>@Model.Notification.SearchPhrase</h3>
          <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small>
          <hr />

    You can see that the syntax is a little different. Instead of the familiar @model header, the raw Razor @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now GitHub, I guess) and the template implementation they use is closer to MVC's Razor, but there are other differences. In the end, don't expect exact behavior like MVC templates if you use an external Razor rendering engine. This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second Razor engine in a Web app, and the fact that here the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable.

    You win some, you lose some

    It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios), rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen, as it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately it'd be nice to have a way for an MVC-based application to create a minimal ControllerContext from scratch – maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime, requiring a WorkerRequest. Brrr… The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes, there are a few things that clearly are required, like the virtual path and possibly the disk path to the root of the app, but beyond that view rendering should not require much. But, no such luck.
    For now custom RazorHosting seems to be the only way to make Razor rendering go outside of the MVC context…

    Resources:
    Full ViewRenderer.cs source code from Westwind.Web.Mvc library
    Hosting the Razor Engine for Non-Web Applications
    RazorEngine on GitHub

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • Virtualbox on Ubuntu 12.04 and 3.5 kernel

    - by kas
    I have installed the 3.5 kernel under Ubuntu 12.04. When I install VirtualBox I receive the following error:

      Setting up virtualbox (4.1.12-dfsg-2ubuntu0.2) ...
       * Stopping VirtualBox kernel modules        [ OK ]
       * Starting VirtualBox kernel modules
       * No suitable module for running kernel found        [fail]
      invoke-rc.d: initscript virtualbox, action "restart" failed.
      Processing triggers for python-central ...
      Setting up virtualbox-dkms (4.1.12-dfsg-2ubuntu0.2) ...
      Loading new virtualbox-4.1.12 DKMS files...
      First Installation: checking all kernels...
      Building only for 3.5.0-18-generic
      Building initial module for 3.5.0-18-generic
      Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
      Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
       * Stopping VirtualBox kernel modules        [ OK ]
       * Starting VirtualBox kernel modules
       * No suitable module for running kernel found        [fail]
      invoke-rc.d: initscript virtualbox, action "restart" failed.
      Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...

    Does anyone know how I might be able to resolve this? Edit – here is the make.log:

      DKMS make.log for virtualbox-4.1.12 for kernel 3.5.0-18-generic (x86_64)
      Mon Nov 19 12:12:23 EST 2012
      make: Entering directory `/usr/src/linux-headers-3.5.0-18-generic'
        LD      /var/lib/dkms/virtualbox/4.1.12/build/built-in.o
        LD      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/built-in.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/linux/SUPDrv-linux.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrvSem.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/alloc-r0drv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/initterm-r0drv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/memobj-r0drv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/mpnotification-r0drv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/powernotification-r0drv.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/assert-r0drv-linux.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/alloc-r0drv-linux.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/initterm-r0drv-linux.o
        CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o
      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjLinuxDoMmap’:
      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c:1150:9: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration]
      cc1: some warnings being treated as errors
      make[2]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o] Error 1
      make[1]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv] Error 2
      make: *** [_module_/var/lib/dkms/virtualbox/4.1.12/build] Error 2
      make: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic'

    Read the article

  • How to Specify AssemblyKeyFile Attribute in .NET Assembly and Issues

    How do I specify a strong key file in an assembly? Answer: you can specify the snk file information using the following line:

      [assembly: AssemblyKeyFile(@"c:\Key2.snk")]

    Where do I specify the strong key file (snk file)? Answer: you have two options to specify the AssemblyKeyFile information: 1. in a class, or 2. in AssemblyInfo.cs.

    1. In a class, you must place the above line before the namespace declaration of the class and after all the imports or usings. Example: see line 7 in the sample class below:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Reflection;

      [assembly: AssemblyKeyFile(@"c:\Key1.snk")]

      namespace Csharp3Part1
      {
          class Person
          {
              public string GetName()
              {
                  return "Smith";
              }
          }
      }

    2. In AssemblyInfo.cs. You can also specify the assembly information in AssemblyInfo.cs. Example: see line 16 in the sample AssemblyInfo.cs below:

      using System.Reflection;
      using System.Runtime.CompilerServices;
      using System.Runtime.InteropServices;

      // General Information about an assembly is controlled through the following
      // set of attributes. Change these attribute values to modify the information
      // associated with an assembly.
      [assembly: AssemblyTitle("Csharp3Part1")]
      [assembly: AssemblyDescription("")]
      [assembly: AssemblyConfiguration("")]
      [assembly: AssemblyCompany("Deloitte")]
      [assembly: AssemblyProduct("Csharp3Part1")]
      [assembly: AssemblyCopyright("Copyright © Deloitte 2009")]
      [assembly: AssemblyTrademark("")]
      [assembly: AssemblyCulture("")]
      [assembly: AssemblyKeyFile(@"c:\Key1.snk")]

      // Setting ComVisible to false makes the types in this assembly not visible
      // to COM components. If you need to access a type in this assembly from
      // COM, set the ComVisible attribute to true on that type.
      [assembly: ComVisible(false)]

      // The following GUID is for the ID of the typelib if this project is exposed to COM
      [assembly: Guid("4350396f-1a5c-4598-a79f-2e1f219654f3")]

      // Version information for an assembly consists of the following four values:
      //
      //      Major Version
      //      Minor Version
      //      Build Number
      //      Revision
      //
      // You can specify all the values or you can default the Build and Revision Numbers
      // by using the '*' as shown below:
      // [assembly: AssemblyVersion("1.0.*")]
      [assembly: AssemblyVersion("1.0.0.0")]
      [assembly: AssemblyFileVersion("1.0.0.0")]

    Issues: you should not specify this in the following ways: 1. in multiple classes, or 2. in both a class and AssemblyInfo.cs. If you do either of the above, Visual Studio or the C#/VB.NET compilers show the following error:

      Duplicate 'AssemblyKeyFile' attribute

    and the warning:

      Use command line option '/keyfile' or appropriate project settings instead of 'AssemblyKeyFile'

    To avoid this, please specify your key file information only once, either in one class or in the AssemblyInfo.cs file. It is suggested to specify this in the AssemblyInfo.cs file. You might also encounter errors like "type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found". Solution: please find it here.

    Read the article

  • What’s New in The Second Edition of Regular Expressions Cookbook

    - by Jan Goyvaerts
    The second edition of Regular Expressions Cookbook is a completely revised edition, not just a minor update. All of the content from the first edition has been updated for the latest versions of the regular expression flavors and programming languages we discuss. We’ve corrected all errors that we could find and rewritten many sections that were either unclear or lacking in detail. And lack of detail was not something the first edition was accused of. Expect the second edition to really dot all i’s and cross all t’s. A few sections were removed. In particular, we removed much talk about browser inconsistencies as modern browsers are much more compatible with the official JavaScript standard. There is plenty of new content. The second edition has 101 more pages, bringing the total to 612. It’s almost 20% bigger than the first edition. We’ve added XRegExp as an additional regex flavor to all recipes throughout the book where XRegExp provides a better solution than standard JavaScript. We did keep the standard JavaScript solutions, so you can decide which is better for your needs. The new edition adds 21 recipes, bringing the total to 146. 14 of the new recipes are in the new Source Code and Log Files chapter. These recipes demonstrate techniques that are very useful for manipulating source code in a text editor and for dealing with log files using a grep tool. Chapter 3, which has recipes for programming with regular expressions, gets only one new recipe, but it’s a doozy. If anyone has ever flamed you for using a regular expression instead of a parser, you’ll now be able to tell them how you can create your own parser by mixing regular expressions with procedural code. Combined with the recipes from the new Source Code and Log Files chapter, you can create parsers for whatever custom language or file format you like. If you have any interest in regular expressions at all, whether you’re a beginner or already consider yourself an expert, you definitely need a copy of the second edition of Regular Expressions Cookbook if you didn’t already buy the first. If you did buy the first edition, and you often find yourself referring back to it, then the second edition is a very worthwhile upgrade. You can buy the second edition of Regular Expressions Cookbook from Amazon or wherever technical books are sold. Ask for ISBN 1449319432.

    Read the article

  • SQL SERVER – Stored Procedure and Transactions

    - by pinaldave
    I just overheard the following statement – “I do not use Transactions in SQL as I use Stored Procedures“. I just realized that there are so many misconceptions about this subject. Transactions have nothing to do with Stored Procedures. Let me demonstrate that with a simple example:

      USE tempdb
      GO
      -- Create 3 Test Tables
      CREATE TABLE TABLE1 (ID INT);
      CREATE TABLE TABLE2 (ID INT);
      CREATE TABLE TABLE3 (ID INT);
      GO
      -- Create SP
      CREATE PROCEDURE TestSP
      AS
      INSERT INTO TABLE1 (ID) VALUES (1)
      INSERT INTO TABLE2 (ID) VALUES ('a')
      INSERT INTO TABLE3 (ID) VALUES (3)
      GO
      -- Execute SP
      -- SP will error out
      EXEC TestSP
      GO
      -- Check the Values in Tables
      SELECT * FROM TABLE1;
      SELECT * FROM TABLE2;
      SELECT * FROM TABLE3;
      GO

    Now, the main point is: if a Stored Procedure were transactional, it would roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a Stored Procedure does not, by itself, add transactional behavior to a batch of T-SQL. Let’s see the result very quickly. It is very clear that there were entries in TABLE1 which are not shown in the subsequent tables. If the SP were transactional in terms of T-SQL query batches, there would be no entries in any of the tables. If you want to use transactions with a Stored Procedure, wrap the code with BEGIN TRAN and COMMIT TRAN. The example is as follows:

      CREATE PROCEDURE TestSPTran
      AS
      BEGIN TRAN
      INSERT INTO TABLE1 (ID) VALUES (11)
      INSERT INTO TABLE2 (ID) VALUES ('b')
      INSERT INTO TABLE3 (ID) VALUES (33)
      COMMIT
      GO
      -- Execute SP
      EXEC TestSPTran
      GO
      -- Check the Values in Tables
      SELECT * FROM TABLE1;
      SELECT * FROM TABLE2;
      SELECT * FROM TABLE3;
      GO
      -- Clean up
      DROP TABLE Table1
      DROP TABLE Table2
      DROP TABLE Table3
      GO

    In this case, there will be no new entries in any of the tables. What is your opinion about this blog post? Please leave your comments about it here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Fix Error: Microsoft OLE DB Provider for SQL Server error '80040e07' or Microsoft SQL Native Client error '80040e07'

    - by pinaldave
    I quite often receive questions where users are looking for a solution to the following error:

      Microsoft OLE DB Provider for SQL Server error '80040e07'
      Syntax error converting datetime from character string.

    OR

      Microsoft SQL Native Client error '80040e07'
      Syntax error converting datetime from character string.

    If you have ever faced the above error, I have a very simple solution for you. The solution is to carefully check the date being inserted into the datetime column. This error often comes up when an application or user attempts to enter an incorrect date into a datetime field. Here is one example: a reader was using a classic ASP application with the OLE DB provider for SQL Server. When he tried to run the following script, he faced the above-mentioned error:

      INSERT INTO TestTable (ID, MyDate)
      VALUES (1, '01-Septeber-2013')

    The reason for the error was simple: he had misspelled the word September. Upon correcting the word, he was able to successfully insert the value and the error was gone. Incorrect values or typos are not the only reason for this error; there can be issues with CAST or CONVERT as well. If you attempt the following code using SQL Native Client or in your application, you will also get a similar error:

      SELECT CONVERT (datetime, '01-Septeber-2013', 112)

    The reason here is very simple: any conversion attempt or any other kind of operation on an incorrect date/time string can lead to the above error. If you are not using embedded dynamic code in your application language but attempt a similar operation on an incorrect datetime string, you will get the following error:

      Msg 241, Level 16, State 1, Line 2
      Conversion failed when converting date and/or time from character string.

    Remember: check the values of your strings when you attempt to convert them to datetime – they may contain incorrect values or be incorrectly formatted. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Kernel Error during upgrade or update commands

    - by Ashesh
    I am getting these errors during sudo apt-get update and upgrade. I tried all possible options with no success. I recently upgraded to 13.04 and had problems with Broadcom WiFi; I fixed that issue using the clean script, but it looks like the kernel did not install properly. Here is the output of a few of the commands I ran:

      ashesh@ashesh-HPdv4:~$ sudo dpkg -r bcmwl-kernel-source
      (Reading database ... 175338 files and directories currently installed.)
      Removing bcmwl-kernel-source ...
      Removing all DKMS Modules
      Done.
      update-initramfs: deferring update (trigger activated)
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-3.8.0-25-generic
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/mtd/mtd.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/mtd/mtd.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/sfc/sfc.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/sfc/sfc.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/mellanox/mlx4/mlx4_core.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/bnx2x/bnx2x.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/cnic.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/broadcom/cnic.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/qlogic/netxen/netxen_nic.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/qlogic/netxen/netxen_nic.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/brocade/bna/bna.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/net/ethernet/brocade/bna/bna.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/libfc/libfc.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/libfc/libfc.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/advansys.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/advansys.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/be2iscsi/be2iscsi.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/be2iscsi/be2iscsi.ko’: Input/output error
      cp: reading ‘/lib/modules/3.8.0-25-generic/kernel/drivers/scsi/bnx2i/bnx2i.ko’: Input/output error
      cp: failed to extend ‘/tmp/mkinitramfs_8gjKwQ//lib/modules/3.8.0-25-generic/kernel/drivers/scsi/bnx2i/bnx2i.ko’: Input/output error
      Bus error (core dumped)
      depmod: ../libkmod/libkmod-elf.c:207: elf_get_mem: Assertion `offset < elf->size' failed.
      Aborted (core dumped)

    I am not a techie, but I need your support to resolve this without re-installing from scratch.

    Read the article

  • Invalid SSH key error in juju when using it with MAAS

    - by Captain T
    This is the output of juju from a clean install with 2 nodes, all running 12.04. juju bootstrap finishes with no errors and allocates the machine to the user, but still no joy after juju environment-destroy and rebuild with different users and different nodes.

      root@cloudcontrol:/storage# juju -v status
      2012-06-07 11:19:47,602 DEBUG Initializing juju status runtime
      2012-06-07 11:19:47,621 INFO Connecting to environment...
      2012-06-07 11:19:47,905 DEBUG Connecting to environment using node-386077143930...
      2012-06-07 11:19:47,906 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="node-386077143930" remote_port="2181" local_port="57004".
      The authenticity of host 'node-386077143930 (10.5.5.113)' can't be established.
      ECDSA key fingerprint is 31:94:89:62:69:83:24:23:5f:02:70:53:93:54:b1:c5.
      Are you sure you want to continue connecting (yes/no)? yes
      2012-06-07 11:19:52,102 ERROR Invalid SSH key
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@662: Client environment:host.name=cloudcontrol
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@669: Client environment:os.name=Linux
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic
      2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@679: Client environment:user.name=sysadmin
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@687: Client environment:user.home=/root
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@699: Client environment:user.dir=/storage
      2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:57004 sessionTimeout=10000 watcher=0x7feb11afc6b0 sessionId=0 sessionPasswd=<null> context=0x2dc7d20 flags=0
      2012-06-07 11:19:52,429:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
      2012-06-07 11:19:55,765:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client

    I have tried numerous ways of creating the keys with ssh-keygen -t rsa -b 2048, ssh-keygen -t rsa, and ssh-keygen, and I have tried adding those to the MAAS web config page, but I always get the same result. I have added the appropriate public key afterwards to ~/.ssh/authorized_keys. I can also ssh to the node, but as I have not been asked to give it a user name or password or set up any sort of account, I cannot manually ssh into the node; the setup of the node is all handled by the maas server. It seems like a simple error of looking at the wrong key or looking in the wrong places. The only other suggestions I can find are to destroy the environment and rebuild (but that didn't work umpteen times now) or leave it to build the instance once the node has powered up, but I have left it for a few hours, and left it overnight to build, with no luck.

    Read the article

  • Ubuntu Software Center starts, then crashes before fully loaded [closed]

    - by Nathan Weisser
    Possible Duplicate: Software center not opening

    I am brand new to Linux and Ubuntu, and I couldn't install GIMP without the Software Center. I looked up earlier how to fix it, and it said to fix my sources list, and I did, but now I get a new error in the terminal:

      2012-08-14 15:29:08,941 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
      2012-08-14 15:29:08,954 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
      2012-08-14 15:29:09,407 - softwarecenter.ui.gtk3.app - INFO - building local database
      2012-08-14 15:29:09,408 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
      2012-08-14 15:29:17,308 - softwarecenter.db.update - WARNING - Problem creating rebuild path '/var/cache/software-center/xapian_rb'.
      2012-08-14 15:29:17,309 - softwarecenter.db.update - WARNING - Please check you have the relevant permissions.
      2012-08-14 15:29:17,309 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
      2012-08-14 15:29:18,039 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
      2012-08-14 15:29:18,431 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
      2012-08-14 15:29:19,153 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
      Traceback (most recent call last):
        File "/usr/bin/software-center", line 176, in <module>
          app.run(args)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1422, in run
          self.show_available_packages(args)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1352, in show_available_packages
          self.view_manager.set_active_view(ViewPages.AVAILABLE)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 154, in set_active_view
          view_widget.init_view()
        File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 136, in init_view
          SoftwarePane.init_view(self)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/softwarepane.py", line 215, in init_view
          self.icons, self.show_ratings)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/views/appview.py", line 69, in __init__
          self.helper = AppPropertiesHelper(db, cache, icons)
        File "/usr/share/software-center/softwarecenter/ui/gtk3/models/appstore2.py", line 109, in __init__
          softwarecenter.paths.APP_INSTALL_PATH)
        File "/usr/share/software-center/softwarecenter/db/categories.py", line 255, in parse_applications_menu
          category = self._parse_menu_tag(child)
        File "/usr/share/software-center/softwarecenter/db/categories.py", line 444, in _parse_menu_tag
          query = self._parse_include_tag(element)
        File "/usr/share/software-center/softwarecenter/db/categories.py", line 402, in _parse_include_tag
          xapian.Query.OP_AND)
        File "/usr/share/software-center/softwarecenter/db/categories.py", line 341, in _parse_and_or_not_tag
          operator_elem, xapian.Query(), xapian.Query.OP_OR)
        File "/usr/share/software-center/softwarecenter/db/categories.py", line 385, in _parse_and_or_not_tag
          q = self.db.xapian_parser.parse_query(s,
        File "/usr/share/software-center/softwarecenter/db/database.py", line 174, in xapian_parser
          xapian_parser = self._get_new_xapian_parser()
        File "/usr/share/software-center/softwarecenter/db/database.py", line 200, in _get_new_xapian_parser
          xapian_parser.set_database(self.xapiandb)
        File "/usr/share/software-center/softwarecenter/db/database.py", line 166, in xapiandb
          self._db_per_thread[thread_name] = self._get_new_xapiandb()
        File "/usr/share/software-center/softwarecenter/db/database.py", line 179, in _get_new_xapiandb
          xapiandb = xapian.Database(self._db_pathname)
        File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3666, in __init__
          _xapian.Database_swiginit(self,_xapian.new_Database(*args))
      xapian.DatabaseOpeningError: Couldn't detect type of database

    I'm not sure how to fix the errors, and I couldn't find a topic on them anywhere. Be nice, because I am a two-day-old Linux user :/ Tell me if you need my sources list.

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end-user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: “How do I…?” “Now what?” “Why isn’t this working?” “Why do I have to redo the work I just did?” It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier. How do I…? While developing an ETL application, have you ever asked yourself: “How do I set up the connection to my SQL Server database?”, “How do I import my table definitions from Access?”, etc. An easy answer might be “read the manual”, but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it. Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user’s work will help users move forward in ETL application development. Why isn’t this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is incomplete or has been completed incorrectly, then the developer can spend much less time in the design→compile→error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle. Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects implies that the application development effort should decrease over time as the library acquires more objects. I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
    Just a day ago I received a very interesting email. Here is the email (modified a bit to make it relevant to this blog post): “Pinal, we are facing a very strange issue. One of our queries related to backup files and backup sets has suddenly stopped working in SSMS. It works fine in the application and in the stored procedure, but when we run it in SSMS it gives the following error:

      Msg 208, Level 16, State 1, Line 1
      Invalid object name 'dbo.backupfile'.

    Here are the queries we are trying to execute:

      SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
      FROM dbo.backupset;

      SELECT logical_name, backup_size, file_type
      FROM dbo.backupfile;

    These queries give us details about backup sets and backup files from when the backup was taken.” When I receive this kind of email, I usually have no answer right away. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I first asked him to check two things: was he connected to the correct server? His answer was yes. Did he have enough permissions? His answer was that he was logged in as an admin. This meant there was something more to it, and I requested a screenshot of his SSMS. He promptly sent it, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why his queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image. The answer is very simple: the database context is the master database. To execute the above two queries, the database context has to be msdb; the tables backupset and backupfile belong to the msdb database only. Here are two workarounds for the above problem: 1) Change the context to msdb. Run as follows, the two queries will not error out and will give the desired result:

      USE msdb
      GO
      SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
      FROM dbo.backupset;

      SELECT logical_name, backup_size, file_type
      FROM dbo.backupfile;

    2) Prefix the tables with msdb. In cases where the above script is used in a stored procedure or as part of a bigger query, it may not be possible to change the context of the whole query to a specific database. Use three-part naming and prefix the tables with msdb:

      SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
      FROM msdb.dbo.backupset;

      SELECT logical_name, backup_size, file_type
      FROM msdb.dbo.backupfile;

    A very simple solution, but one that sometimes keeps people wondering for an answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
I'm using Ubuntu 12.04 LTS on a Dell PowerEdge R720 server, and I'm facing a problem when I run apt-get -f install:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic-pae
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae
    update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1.
    run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic-pae.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic-pae:
     linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however:
      Package linux-image-3.2.0-37-generic-pae is not configured yet.
    dpkg: error processing linux-image-generic-pae (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates it's a follow-up error from a previous failure.
    Errors were encountered while processing:
     linux-image-3.2.0-37-generic-pae
     linux-image-generic-pae
    E: Sub-process /usr/bin/dpkg returned an error code (1)

I even tried apt-get clean, apt-get remove, apt-get autoremove, and apt-get purge; there is no difference, it shows the same error message as above. I even checked the disk space:

    Filesystem      1K-blocks   Used       Available  Use%  Mounted on
    /dev/sda6       24030076    612456     22196964   3%    /
    udev            16536644    4          16536640   1%    /dev
    tmpfs           6618884     1164       6617720    1%    /run
    none            5120        0          5120       0%    /run/lock
    none            16547208    72         16547136   1%    /run/shm
    cgroup          16547208    0          16547208   0%    /sys/fs/cgroup
    /dev/sda1       93207       75034      13361      85%   /boot
    /dev/sda10      9611492     1096076    8027176    13%   /tmp
    /dev/sda12      9611492     226340     8896912    3%    /opt
    /dev/sda13      9611492     152516     8970736    2%    /srv
    /dev/sda7       9611492     592208     8531044    7%    /home
    /dev/sda8       9611492     2656736    6466516    30%   /usr
    /dev/sda9       9611492     696468     8426784    8%    /var
    /dev/sda14      961237336   134563516  777845764  15%   /usr/data
    /dev/sda15      618991384   84498388   503050052  15%   /usr/data1
    /dev/sda11      9611492     152616     8970636    2%    /usr/local

Is there a problem with how space was allotted to the partitions? Please let me know the solution; it's urgent. Regards.
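The "No space left on device" from gzip during update-initramfs points at the /boot partition, which the df output shows at 85% full with only about 13 MB free, not enough to write a new initrd image. A hedged sketch of a common first step, assuming older kernel image packages are still installed and can be removed (the 3.2.0-35 version below is a hypothetical example):

    # See which kernel you are running, so you don't remove it
    uname -r

    # List installed kernel image packages
    dpkg -l 'linux-image-*' | grep ^ii

    # Purge an older kernel to free space in /boot (never the running one)
    sudo apt-get purge linux-image-3.2.0-35-generic-pae

    # Then retry the failed configuration step
    sudo apt-get install -f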

    Read the article

  • Stackify Aims to Put More ‘Dev’ in ‘DevOps’

    - by Matt Watson
Originally published on VisualStudioMagazine.com on 8/22/2012 by Keith Ward.

The Kansas City-based startup wants to make it easier for developers to examine the network stack and find problems in code.

The first part of "DevOps" is "Dev". But according to Matt Watson, Devs aren't connected enough with Ops, and it's time that changed. He founded the startup company Stackify earlier this year to do something about it. Stackify gives developers unprecedented access to the IT side of the equation, Watson says, without putting additional burden on the system and network administrators who ultimately ensure the health of the environment.

"We need a product designed for developers, with the goal of getting them more involved in operations and app support. Now, there's next to nothing designed for developers," Watson says. Stackify allows developers to search the network stack to troubleshoot problems in their software that might otherwise take days of coordination between development and IT teams to solve. Stackify allows developers to search log files, configuration files, databases and other infrastructure to locate errors. A key to this is that the developers are normally granted read-only access, soothing admin fears that developers will upload bad code to their servers.

Implementation starts with data collection on the servers. Among the information gleaned is application discovery, server monitoring, file access, and other data collection, according to Stackify's Web site. Watson confirmed that Stackify works seamlessly with virtualized environments as well. Although the data collection software must be installed on Windows servers, it can monitor both Windows and Linux servers. Once collection's finished, developers have the kind of information they need, without causing heartburn for the IT staff.

Stackify is a 100 percent cloud-based service. The company uses Windows Azure for hosting, a decision Watson's happy with. With Azure, he says, "It's nice to have all the dev tools like cache and table storage." Although there have been a few glitches here and there with the service, it's run very smoothly for the most part, he adds.

Stackify is currently in a closed beta, with a public release scheduled for October. Watson says that pricing is expected to be $25 per month, per server, with volume discounts available. He adds that the target audience is companies with at least five developers.

Watson founded Stackify after selling his last company, VinSolutions, to AutoTrader.com for "close to $150 million", according to press accounts. Watson has since founded the Watson Technology Group, which focuses on angel investing.

About the Author: Keith Ward is the editor in chief of Visual Studio Magazine.

    Read the article

  • SQLAuthority News – Book Review – Beginning T-SQL 2008 by Kathi Kellenberger

    - by pinaldave
Beginning T-SQL 2008 by Kathi Kellenberger
Amazon Link

Detail Review: Beginning T-SQL 2008 is one of the best books on the market if you are just beginning to work with Microsoft SQL, or have a little bit of experience and need to learn more quickly. Each chapter of the book introduces a new subject and builds upon topics covered in previous chapters. The author of the book, Kathi Kellenberger, understands that you need to form a solid foundation of knowledge before moving on to new topics, and sets up each subject nicely. Because the chapters move in an orderly progression, you continue to use skills you learned earlier.

One of the best features of Beginning T-SQL 2008 is that each chapter has multiple examples and exercises. Many books introduce a topic and then never go back to it. This book gives enough examples that you will be familiar with the subject when you come across it in real life. The exercises at the end of each chapter mean that you will be using the skills you learned, and there is no better way to cement a subject in your brain.

The book also includes discussions of the common errors that programmers will come across, how to avoid them, and how to fix them if they happen. Ms. Kellenberger understands that not only do mistakes happen, but they are bound to happen if you aren't trained properly. Mistakes are part of the learning process!

The book begins by discussing relational theory, so that programmers will understand the way T-SQL works from the ground up. It also walks readers through writing accurate queries, combining set-based and procedural processing, embedding logic in stored functions, and much more.

Overall, the main goal of Beginning T-SQL 2008 is to introduce novices to SQL programming and quickly familiarize them with the basics of running the program. The book is written with the idea that readers will not know any of the technical terms or vocabulary. However, if you are a little more familiar with SQL and looking to become better, you will still find this book very helpful.

Rating: 4.5+ Stars

Summary: I cannot recommend Beginning T-SQL 2008 highly enough. If you are going to buy any beginner's guide to Transact-SQL, this is the one you should spend your money on. You can save yourself a lot of time and effort later by using this very affordable manual to learn the basics, which will allow you to become an expert much faster.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it's nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release.

Enough waffle... Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you can see that for the target table SALES_TARGET we can use the UD5 property; when the mapping is assigned the code template (knowledge module) modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code from Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime.

Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how traditional mapping generation and preview works. You do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This then creates another tab with the prefix 'Code - <mapping name>' where the generated code is put. Scrolling down, we find the last step with the indices being created. It looks good, so we are ready to deploy and execute.

After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information). This gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code that was executed, plus any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces. Fun and powerful stuff!

We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (for demo purposes), we modify the create target table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing some great flexibility and extensibility for the data integration tools.
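For readers who have not used global temporary tables, the DDL the tweaked template needs to emit looks roughly like the sketch below. This is a hand-written illustration, not the exact code OWB generates, and the table and column names are placeholders:

    -- Intermediate staging table whose rows are private to each session
    CREATE GLOBAL TEMPORARY TABLE stage_sales_intermediate (
      product_id   NUMBER,
      sale_date    DATE,
      amount       NUMBER(12,2)
    )
    ON COMMIT PRESERVE ROWS;  -- rows survive commits for the life of the session

With ON COMMIT PRESERVE ROWS, the downstream execution unit in the daisy chain can read the intermediate rows after the upstream unit commits, and Oracle cleans the data up automatically when the session ends.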

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run 'sudo apt-get install -f' I get this:

    will@UbuntuBox:/mnt/slax$ sudo apt-get install -f
    [sudo] password for will:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core
    The following NEW packages will be installed:
      libmono-wcf3.0-cil
    The following packages will be upgraded:
      openoffice.org-calc openoffice.org-core
    2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    16 not fully installed or removed.
    Need to get 0B/32.5MB of archives.
    After this operation, 1,929kB disk space will be freed.
    Do you want to continue [Y/n]? Y
    (Reading database ... 201565 files and directories currently installed.)
    Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ...
    Unpacking replacement openoffice.org-calc ...
    xz: (stdin): Compressed data is corrupt
    dpkg-deb: subprocess <decompress> returned error exit status 1
    dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
     short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so'
    dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core:
     openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1)
      openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed.
    dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack):
     conflicting packages - not installing openoffice.org-core
    Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ...
    dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error'
    dpkg-deb: subprocess <decompress> returned error exit status 2
    dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack):
     subprocess dpkg-deb --fsys-tarfile returned error exit status 2
    Errors were encountered while processing:
     /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb
     /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb
     /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

Any idea how I can get the broken packages fixed?

Cheers,
Will
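The "Compressed data is corrupt" and "internal gzip read error" messages suggest the downloaded .deb files sitting in the apt cache are damaged, rather than anything being wrong with dpkg itself. A hedged sketch of a common recovery path is to throw the cached archives away and let apt download them again:

    # Remove just the corrupted packages from the local archive cache
    sudo rm /var/cache/apt/archives/openoffice.org-calc_*.deb \
            /var/cache/apt/archives/openoffice.org-core_*.deb \
            /var/cache/apt/archives/libmono-wcf3.0-cil_*.deb

    # Or simply clear the whole cache
    sudo apt-get clean

    # Re-download and retry the failed installs
    sudo apt-get update
    sudo apt-get install -f

If the files come down corrupt a second time, the problem is more likely a flaky mirror or network path than the local machine.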

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other, just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!).

I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so that often it's not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like.

Source Code on GitHub

I looked at this data for these technologies:

ASP.NET Web API
ASP.NET MVC
WebForms
ASP.NET WebPages
ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS 8)
WCF REST
Raw ASP.NET HttpHandlers

It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity than anything, and I thought it interesting enough to discuss on the blog :-)

First test: Raw Throughput

The first thing I did is test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real-life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I created the simplest Hello World request that I could come up with for each tech.

Http Handler

The first is the lowest-level approach, which is an HTTP handler.

    public class Handler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

WebForms

Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page:

    public partial class HelloWorld_CodeBehind : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
            Response.End();
        }
    }

while the markup page only contains some static output via an expression:

    <%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
    Hello World. Time is <%= DateTime.Now %>

ASP.NET WebPages

WebPages is the freestanding Razor implementation of ASP.NET.
Here's the simple HelloWorld.cshtml page:

    Hello World @DateTime.Now

WCF REST

WCF REST was the token REST implementation for ASP.NET before Web API, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class WcfService
    {
        [OperationContract]
        [WebGet]
        public Stream HelloWorld()
        {
            var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
            var ms = new MemoryStream(data);
            // Add your operation implementation here
            return ms;
        }
    }

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services, because prior to Web API this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

    public class MvcPerformanceController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult HelloWorldCode()
        {
            return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
        }
    }

ASP.NET WebAPI

Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

    public class WebApiPerformanceController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage HelloWorldCode()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
            };
        }
    }

Testing

Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much.

The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

    ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server.
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test with:

100,000 iterations
Load factor of 20 concurrent connections
IISReset before starting
A short warm-up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

    IISRESET

    REM make sure you add
    REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
    REM to your path so ab.exe can be found

    REM Warm up
    ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
    ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
    ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

    ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
    ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network.

I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be…

Results

Ok, let's cut straight to the chase. Below are the results from the tests…

It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising).

As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test.

But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5,700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period. Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-)

In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

    public class Person
    {
        public Person()
        {
            Id = 10;
            Name = "Rick";
            Entered = DateTime.Now;
        }

        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime Entered { get; set; }
    }

Here are the updated handler classes that use Person:

Handler

    public class Handler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var action = context.Request.QueryString["action"];
            if (action == "json")
                JsonRequest(context);
            else
                TextRequest(context);
        }

        public void TextRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        }

        public void JsonRequest(HttpContext context)
        {
            var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
            context.Response.ContentType = "application/json";
            context.Response.Write(json);
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class WcfService
    {
        [OperationContract]
        [WebGet]
        public Stream HelloWorld()
        {
            var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
            var ms = new MemoryStream(data);
            // Add your operation implementation here
            return ms;
        }

        [OperationContract]
        [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
        public Person HelloWorldJson()
        {
            // Add your operation implementation here
            return new Person();
        }
    }

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

    public class MvcPerformanceController : Controller
    {
        // GET: /MvcPerformance/
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult HelloWorldCode()
        {
            return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
        }

        public JsonResult HelloWorldJson()
        {
            return Json(new Person(), JsonRequestBehavior.AllowGet);
        }
    }

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer.

ASP.NET WebAPI
    public class WebApiPerformanceController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage HelloWorldCode()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
            };
        }

        [HttpGet]
        public Person HelloWorldJson()
        {
            return new Person();
        }

        [HttpGet]
        public HttpResponseMessage HelloWorldJson2()
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter);
            return response;
        }
    }

Testing and Results

To run these data requests I used the following ab.exe commands:

    REM JSON RESPONSES
    ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
    ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:

The performance for each technology drops a little bit, except for WebAPI which is up quite a bit! From this test it appears that WebAPI performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance with JSON results improved significantly, from about 5,580 to 6,530, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures.

At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached, running the ab.exe test with the -X switch:

    ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example, these two results:

    {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
    {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

are different in length - 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports. I didn't notice it at first, but the same is happening when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds.

Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.

Another interesting Side Note: Perf drops over Time

As I was running these tests repeatedly, I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6,500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5,900 req/sec, occasionally jumping lower.
For these tests, this is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these tests were informal, done to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about the adequateness for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often times old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with String content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code?

This stresses another point: Don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, Web Api
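One small follow-up to the Length-failure snag described above: recent Apache Bench builds (the 2.4-era ab) include a -l switch that accepts variable document lengths for dynamic pages, so variable-length JSON responses are not counted as failures. Whether your local ab.exe has it depends on the Apache distribution you installed, so treat this as an assumption and check the help output first:

    REM check whether your build lists the -l option
    ab.exe -h

    REM if supported, variable-length responses no longer show as Length failures
    ab.exe -l -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt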

    Read the article

  • UnsatisfiedLinkError on xawt when running HEC-HMS.sh

    - by G.Oxsen
I am a recent adopter of Linux, and this problem has me stumped. I use HEC-HMS and HEC-DSSVue for work on a regular basis. I had been using the Windows versions in Wine, but they are really buggy, so I decided to try out the Linux versions. The links below will take you to the download pages for these two programs; they are free programs for hydrology and data management. Once I install them and attempt to run the shell file (HEC-HMS.sh, for example), I get a ton of Java errors that I do not understand. If I had to guess, I would say that the Java files in question cannot be found. When I check whether Java is installed, it is. Here is the terminal output from trying to run HEC-HMS.sh:

    Exception in thread "Thread-1" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.load0(Unknown Source)
        at java.lang.System.load(Unknown Source)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at sun.security.action.LoadLibraryAction.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
        at sun.awt.DebugHelper.<clinit>(Unknown Source)
        at java.awt.Component.<clinit>(Unknown Source)
        at javax.swing.ImageIcon.<clinit>(Unknown Source)
        at hms.i.c(Unknown Source)
        at hms.i.b(Unknown Source)
        at hms.K.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Exception in thread "Thread-4" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.load0(Unknown Source)
        at java.lang.System.load(Unknown Source)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at sun.security.action.LoadLibraryAction.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Unknown Source)
        at java.awt.Toolkit.<clinit>(Unknown Source)
        at sun.print.CUPSPrinter.<clinit>(Unknown Source)
        at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
        at sun.print.UnixPrintServiceLookup.refreshServices(Unknown Source)
        at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(Unknown Source)
    Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Toolkit
        at java.awt.Color.<clinit>(Unknown Source)
        at hms.model.l.<init>(Unknown Source)
        at hms.model.ProjectManager.<init>(Unknown Source)
        at hms.Hms.<init>(Unknown Source)
        at hms.Hms.main(Unknown Source)
    Exception in thread "Thread-2" java.lang.NoClassDefFoundError: Could not initialize class sun.print.CUPSPrinter
        at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
        at javax.print.PrintServiceLookup.lookupDefaultPrintService(Unknown Source)
        at hms.util.f.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

I get similar output when I try to run HEC-DSSVue.sh. If anyone could shed some light on a solution, I would really appreciate it. The problem turned out to be that the program needed 32-bit versions of the particular dependencies.
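Given that resolution, and that the library the bundled 32-bit JRE cannot find is libXtst.so.6, the fix on a 64-bit Ubuntu install would look roughly like the sketch below. This is a hedged illustration: the package names are the ones Ubuntu uses for these X11 libraries, and you may need further :i386 packages depending on what else the 32-bit runtime complains about:

    # Install the 32-bit X Test extension library that libmawt.so links against
    sudo apt-get install libxtst6:i386

    # Related 32-bit X libraries the AWT toolkit commonly needs
    sudo apt-get install libxext6:i386 libxrender1:i386 libxi6:i386

    # Then retry the launcher
    ./HEC-HMS.sh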

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >