Search Results

Search found 10789 results on 432 pages for 'cpu upgrade'.

Page 326/432 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • How would I UPDATE these table entries with SQL?

    - by CT
    I am working on an Asset Database problem. I enter assets into a database. Every object is an asset and has variables within the asset table. An object is also a type of asset; in this example the type is server. Here is the query to retrieve all necessary data: SELECT asset.id ,asset.company ,asset.location ,asset.purchaseDate ,asset.purchaseOrder ,asset.value ,asset.type ,asset.notes ,server.manufacturer ,server.model ,server.serialNumber ,server.esc ,server.warranty ,server.user ,server.prevUser ,server.cpu ,server.memory ,server.hardDrive FROM asset LEFT JOIN server ON server.id = asset.id WHERE asset.id = '$id' How would I write a query to update an asset?

    Read the article

  • How Do You Profile & Optimize CUDA Kernels?

    - by John Dibling
    I am somewhat familiar with the CUDA visual profiler and the occupancy spreadsheet, although I am probably not leveraging them as well as I could. Profiling & optimizing CUDA code is not like profiling & optimizing code that runs on a CPU, so I am hoping to learn from your experiences about how to get the most out of my code. There was a post recently looking for the fastest possible code to identify self numbers, and I provided a CUDA implementation. I'm not satisfied that this code is as fast as it can be, but I'm at a loss as to what the right questions are and which tool can give me the answers. How do you identify ways to make your CUDA kernels perform faster?

    Read the article

  • C# Assembly Xna.Framework.dll does not load

    - by jbsnorro
    When trying to load Microsoft.Xna.Framework.dll from any project, it throws a FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E), with no innerException. Even simple code like the following throws that exception: static void Main(string[] args) { Assembly.LoadFile(@"C:\Microsoft.Xna.Framework.dll"); } I run XP x64, but I've set the platform in the configuration manager to x86, because I know it shouldn't (and doesn't) work on x64 or Any CPU. I've manually added the DLL file to the GAC, but that didn't solve the problem. I have also tried the Microsoft Assembly Binding Log Viewer to see if its logs had any useful information, but they didn't: according to them, everything, including the loading, was a success. Any suggestions, please?
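
    HRESULT 0x8007007E is the Win32 "module could not be found" error, which usually points at a missing dependency of the managed DLL (often an unmanaged one) rather than the file being loaded itself. A minimal diagnostic sketch along those lines, using only the path from the question (everything else is illustrative, not a confirmed fix):

        // Illustrative only: narrow down why Assembly.LoadFile fails with 0x8007007E.
        using System;
        using System.Reflection;

        static class LoadDiagnostic
        {
            static void Main()
            {
                const string path = @"C:\Microsoft.Xna.Framework.dll";

                // 1. Confirm the process really is 32-bit (IntPtr is 4 bytes in an x86 process).
                Console.WriteLine("32-bit process: " + (IntPtr.Size == 4));

                // 2. A reflection-only load does not resolve dependencies, so if this succeeds
                //    the DLL itself is present and valid, and the LoadFile failure comes from
                //    one of its dependencies instead.
                Assembly probe = Assembly.ReflectionOnlyLoadFrom(path);
                foreach (AssemblyName dep in probe.GetReferencedAssemblies())
                    Console.WriteLine("managed dependency: " + dep.FullName);

                // Unmanaged dependencies (e.g. the XNA/DirectX native runtime) do not show up
                // here; a tool such as Dependency Walker or Fusion logging is needed for those.
            }
        }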

    Read the article

  • Java and gstreamer-java initialisation error

    - by Mark
    I am building a small app which will play streaming audio from the internet in Java (mainly internet radio stations). I have decided to use the gstreamer-java library for the sound, which uses JNA. I would like to include a check in the code to see whether the gstreamer library has been initialised. When I leave the "Gst.init()" call out (to mimic the library not having been initialised correctly), the application prints the following messages: (process:21888): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.22.3/gobject/gtype.c:2458: initialization assertion failed, use IA__g_type_init() prior to this function (process:21888): GLib-CRITICAL **: g_once_init_leave: assertion `initialization_value != 0' failed The app calls the gstreamer-java library, the error messages appear, but the thread continues to run, hogging the CPU. Is there any way to catch the error or to add a check to prevent it from happening? An alternative would be to put the "Gst.init()" call in the main class, but I am not sure if this would always guarantee the gstreamer library is initialised.

    Read the article

  • NTFS-compressing Virtual PC disks (on host and/or guest)

    - by nlawalker
    I'm hoping someone here can answer these definitively: Does putting a VHD file in an NTFS-compressed folder on the host improve performance of the virtual machine, diminish performance, or neither? What about using NTFS compression within the guest? Does using compression on either the host or the guest lead to any problems like read or write errors? If I were to put a VHD in a compressed folder on the host, would I benefit from compacting it? I've seen references to using NTFS compression on quite a few VPC "tips and tricks" blog posts, and it seems like half of them say to never do it and the other half say that not only does it save disk space but it actually can improve performance if you have a fast CPU and your primary performance bottleneck is the disk.

    Read the article

  • jQuery Validation 1.10 and MVC server-side errors

    - by sam360
    This feature used to work just fine on my website. If I added a custom error to ModelState, the input on the page would be marked as "input-validation-error" and Html.ValidationMessage() would take care of rendering a span with the error message inside it. Due to incompatibility reasons we had to upgrade our jQuery Validation to 1.10. Now when I add a custom error to ModelState, I can debug and see that the HTML elements are being rendered correctly, but as soon as the page load is completed, jQuery Validation removes the error message and sets the "class" attribute of the input to "valid"! Has anyone else come across this issue? UPDATE Testing further shows that the error message is shown on the screen until the field gains focus. As soon as the field gains focus, jQuery Validation removes the custom server-side error message and marks the field as good.

    Read the article

  • How do you deal with the conflict between ActiveSupport::JSON and the JSON gem?

    - by Luke Francl
    I am stumped by this problem. ActiveSupport::JSON defines to_json on various core objects and so does the JSON gem. However, the implementations are not the same -- the ActiveSupport version takes arguments and the JSON gem version doesn't. I installed a gem that required the JSON gem and my app broke. The issue is that I'm using to_json in a controller that returns a list of objects, but I want to control which attributes are returned. When code anywhere in my system does require 'json' I get this error message: TypeError: wrong argument type Hash (expected Data) I tried a couple of things that I read online to fix it, but nothing worked. I ended up rewriting the gem to use ActiveSupport::JSON.decode instead of JSON.parse. This works, but it's not sustainable... I can't be forking gems every time I want to use a gem that requires the JSON gem. Update: The best solution to this problem is to upgrade to Rails 2.3 or higher, which fixed it.

    Read the article

  • Automatic Hudson CI setup and plugin updates through apt?

    - by aapeli
    Hi! We've used Hudson for quite a while to implement a CI server with all the bells and whistles. The setup is quite straightforward when installing from the provided RPMs and Debs, but through googling I haven't been able to figure out whether the plugins are installable using apt/rpm or some other package manager. The reason is that I would like to create a (meta)package for Ubuntu which would install and also update both Hudson and all the plugins through the normal upgrade mechanism. At the same time I could create a template setup for other projects: say, a Java EE project needs the git, Cobertura and Chuck Norris plugins, while my Python project needs plugins XXX and YYY. Anybody got such a setup? As a workaround I considered setting up a number of Maven POMs which would do the initial setup and later upgrades, but I feel this would require more scripting on the side, which I'm not very eager to do. Any other suggestions would also be appreciated.

    Read the article

  • Upgrading a SharePoint list instance that was deployed via feature

    - by Goldmember
    I'm curious how others address this issue. Using VSEWSS 1.3, I have created a site content type, a list definition (with event receivers), and a list instance. All of them are in the same WSP solution and each is activated individually via features. Now let's assume that all the features have been activated for some time, and the list instance contains a number of items (that can't be deleted). Now suppose I need to make a change to the list's schema.xml (inject some JavaScript, modify views, whatever). Is it even possible to "upgrade" the schema of the existing list instance? Otherwise I would think I'm stuck creating a new instance and copying items over.

    Read the article

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime? Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: Maybe this is better asked by saying, how do I use the Xcode debug tools to do a stack trace?

    Read the article

  • Java tool for debugging

    - by user269723
    Hi experts, We are currently studying a Java-based tool which is primarily a reporting tool. It was developed in the 2000/2001 period and uses many open-source libraries like Apache Avalon, Mx4J.Adaptor, edu.Oswego (the Java concurrency package), etc. The tool uses JDK 1.3.1 and the goal is to upgrade to JDK 1.5. We have also been asked to remove these 'outdated' packages and replace them with standard Java packages if possible. Unfortunately, while we have the code available for study, it lacks any documentation and it is really difficult to track the flow during debugging (the total number of classes written might be more than 1000). What's the best way to understand this kind of tool? Is there any graphical tool to see the relationships between the classes? Thanks SR

    Read the article

  • Is VS2010 Premium Worth the Price?

    - by WindyCityEagle
    I know this is somewhat subjective, but I can't find an honest answer anywhere; everything I can find concerning VS2010 is Microsoft marketing material. Our small group is going to upgrade to VS2010 (mostly for F# and the new threading features), but we can't decide between the Professional and Premium versions. The integrated testing features in Premium sound good, but I can't figure out if they're worth the 10x increase in cost between the two versions (Professional is ~549, Premium is ~5400). Has anyone been faced with a similar decision? What swayed you one way or the other?

    Read the article

  • How to correctly load 32-bit DLL dependencies when running a program from a batch file

    - by neilwhitaker1
    I have written a tool that references Microsoft.TeamFoundation.VersionControl.Client.dll, which is a 32-bit DLL. When I build my tool on 64-bit Windows, I set Visual Studio to specifically target x86 in order to force a 32-bit build. Targeting x86 instead of Any CPU prevents me from getting a BadImageFormatException, as long as I invoke the tool directly (e.g. by typing "myTool.exe" on the command line). However, if I run a batch file that invokes the tool, I still get the exception. This happens even if the batch file runs in a 32-bit command prompt (%WINDIR%\SysWOW64\cmd.exe). What else can I do to make this work?

    Read the article

  • Slowness of Netbeans Platform Apps - how to mitigate?

    - by user559298
    Hi, We are developing a commercial application (pretty complex) in Java using the NetBeans IDE. We have two options in NetBeans for creating it: (1) a plain Java desktop app, or (2) a NetBeans Platform app. Our requirements are that application startup and response times should be very fast, that the app should be modular, etc. We did a proof of technology by creating apps using both approaches mentioned above. We found the NetBeans Platform app to be very slow during startup and screen navigation compared to the pure Swing-based desktop app. We tried to implement the suggestions provided at http://wiki.netbeans.org/Category:Performance:FAQ and in other blogs and forums to improve the speed of the app, but were not successful. We feel that for a complex desktop app the NetBeans Platform would be better suited, but it's not meeting our performance requirements (startup and response times, memory footprint, CPU usage guidelines, etc.). Can anyone guide us on how to improve the performance of NetBeans Platform apps? Thanks in advance for your help. -bhan

    Read the article

  • C# Socket Server

    - by Snoopy
    In .NET 3.5 new socket classes were released: http://msdn.microsoft.com/en-us/library/bb968780.aspx I found a sample, but some questions regarding best practices remain: http://code.msdn.microsoft.com/nclsamples/Wiki/View.aspx?title=Socket%20Performance m_numConnections (the maximum number of connections the sample is designed to handle simultaneously) is probably equal to the number of CPU cores I have? m_receiveBufferSize is the size of one packet, like 8 KB? How should I handle a length byte? I don't understand opsToPreAlloc -- is that for when I code a transparent proxy? Regarding the multithreading, what should be used? The Reactive Extensions seem to be a good choice. Has anyone tried this in a real-world project? Are there better options? I had bad experiences with the .NET thread pool in the past.
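
    On the length-byte question, a common pattern is to prefix every message with its length and to buffer incoming bytes until a full message has arrived. A minimal sketch of that framing logic (illustrative only; this is not the API of the MSDN sample, and a real server should also cap the declared length to guard against hostile peers):

        // Length-prefixed framing: each message is a 4-byte little-endian length
        // followed by the payload. Append() can be fed from any receive callback.
        using System;
        using System.Collections.Generic;

        class MessageFramer
        {
            private readonly List<byte> _pending = new List<byte>();

            // Call with whatever a receive operation handed you (buffer/offset/count);
            // onMessage fires once per complete payload, however the bytes were split.
            public void Append(byte[] buffer, int offset, int count, Action<byte[]> onMessage)
            {
                for (int i = 0; i < count; i++)
                    _pending.Add(buffer[offset + i]);

                while (_pending.Count >= 4)
                {
                    int length = BitConverter.ToInt32(_pending.ToArray(), 0);
                    if (_pending.Count < 4 + length)
                        break;                       // wait for the rest of the payload

                    byte[] payload = _pending.GetRange(4, length).ToArray();
                    _pending.RemoveRange(0, 4 + length);
                    onMessage(payload);
                }
            }
        }

    The List<byte> copies are fine for a sketch; a production server would typically keep one reusable ring buffer per connection instead.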

    Read the article

  • Loading tables dynamically with NHibernate

    - by Trevor Goertzen
    I'm working on a project that requires me to load tables based on table names stored in another table. More tables will be added to the DB (and by someone else), so creating NHibernate mapping files for each table isn't an option. Does anyone know if it is possible to load tables dynamically using NHibernate? Edit: I should add that I'm on .NET 2.0, so I can't use Fluent NHibernate. Thanks for the suggestion though guys. I will use that as evidence in convincing my associates to upgrade.
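
    One pragmatic approach, sketched below purely as an illustration (the class and method names are made up, not from the question): keep NHibernate mappings for the known entities and read the dynamically named tables through the session's underlying ADO.NET connection, so no hbm.xml file is needed per table.

        // Illustrative sketch: read a table whose name is only known at runtime via the
        // NHibernate session's ADO.NET connection; each row comes back as a dictionary.
        using System.Collections.Generic;
        using System.Data;
        using NHibernate;

        class DynamicTableReader
        {
            public IList<IDictionary<string, object>> LoadTable(ISession session, string tableName)
            {
                // tableName comes from your own registry table; validate it before
                // interpolating, since it cannot be passed as a SQL parameter.
                IList<IDictionary<string, object>> rows = new List<IDictionary<string, object>>();

                using (IDbCommand cmd = session.Connection.CreateCommand())
                {
                    cmd.CommandText = "select * from " + tableName;
                    using (IDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            IDictionary<string, object> row = new Dictionary<string, object>();
                            for (int i = 0; i < reader.FieldCount; i++)
                                row[reader.GetName(i)] = reader.GetValue(i);
                            rows.Add(row);
                        }
                    }
                }
                return rows;
            }
        }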

    Read the article

  • Major performance difference between two Oracle database instances

    - by jrdioko
    I am working with two instances of an Oracle database; call them one and two. Instance two is running on better hardware (hard disk, memory, CPU) than one, and two is one minor version behind one in terms of Oracle version (both are 11g). Both have exactly the same table table_name with exactly the same indexes defined. I load 500,000 identical rows into table_name on both instances. I then run, on both instances: delete from table_name; This command takes 30 seconds to complete on one and 40 minutes to complete on two. Doing INSERTs and UPDATEs on the two tables shows similar performance differences. Does anyone have any suggestions on what could have such a drastic impact on performance between the two databases?

    Read the article

  • Help with Python structure in *nixes.

    - by user198553
    I come from a Windows background when it comes to development environments; I'm used to running an .exe for everything I need and then forgetting about it. I usually code in PHP, JavaScript, CSS, HTML and Python. Now I have to use Linux at work, on an unchangeable Ubuntu 8.04 install, with permission to upgrade my system using the company's repositories only. I need to install Python 2.4.3 to start coding in an old legacy system; I had Python 2.5. I downloaded the Python 2.4.3 tarballs and ran ./configure, make and so on. Everything worked out, but now the "default" installation on my system is Python 2.4 instead of Python 2.5. I want help changing it back and, if possible, some material to read about symlinks, multiple Python installations, virtualenvs and such: everything I need to know before installing/upgrading Python modules. For example, I installed the ElementTree package and don't even know which Python installation it was installed into. Thanks in advance!

    Read the article

  • What is the modern equivalent (C++) style for the older (C-like) fscanf method?

    - by Chris_45
    What is the best option if I want to "upgrade" old C code to newer C++ when reading a file with a semicolon delimiter: /* reading in from file C-like: */ fscanf(tFile, "%d", &mypost.nr); /*delimiter ; */ fscanf(tFile, " ;%[^;];", mypost.aftername);/* delimiter ; */ fscanf(tFile, " %[^;]", mypost.forename); /*delimiter ; */ fscanf(tFile, " ;%[^;];", mypost.dept);/*delimiter ; */ fscanf(tFile, " %[^;];", mypost.position);/* delimiter ; */ fscanf(tFile, "%d", &mypost.nr2); // equivalent best C++ method achieving the same thing?

    Read the article

  • How to improve performance

    - by Ram
    Hi, In one of my applications I am dealing with graphics objects. I am using the open source GPC library to clip/merge two shapes. To improve accuracy I am sampling (adding multiple points between two edges of) the existing shapes. But before displaying the merged shape back I need to remove all the added points between the edges. I have not been able to find an efficient algorithm that will remove all points lying between two edges with the same slope, with minimal CPU utilization. Currently all points are of type PointF. Any pointers on this would be a great help.
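
    One way to frame the problem: an added point is redundant exactly when it is collinear with its two neighbours, and collinearity can be tested with a cross product instead of comparing slopes (no division, no special case for vertical edges). A minimal sketch under those assumptions (the epsilon value is illustrative and should be tuned to the coordinate scale):

        // Illustrative sketch: drop the interior points of a closed shape that are
        // collinear with their neighbours, keeping only the true corners.
        using System;
        using System.Collections.Generic;
        using System.Drawing;

        static class PolygonSimplifier
        {
            public static List<PointF> RemoveCollinear(IList<PointF> pts, float epsilon)
            {
                List<PointF> result = new List<PointF>();
                for (int i = 0; i < pts.Count; i++)
                {
                    PointF prev = pts[(i - 1 + pts.Count) % pts.Count];
                    PointF cur  = pts[i];
                    PointF next = pts[(i + 1) % pts.Count];

                    // Cross product of (cur - prev) and (next - cur); zero means collinear.
                    float cross = (cur.X - prev.X) * (next.Y - cur.Y)
                                - (cur.Y - prev.Y) * (next.X - cur.X);

                    if (Math.Abs(cross) > epsilon)
                        result.Add(cur);
                }
                return result;
            }
        }

    A single pass like RemoveCollinear(points, 1e-4f) over the merged outline is O(n), so it should add little CPU cost compared to the clipping itself.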

    Read the article

  • Is it possible to use DLR in a .NET 3.5 website project?

    - by Aplato
    I'm trying to evaluate an expression stored in a database, e.g. "if (Q1 ==2) {result = 3.1;} elseif (Q1 ==3){result=4.1;} else result = 5.9;". Rather than parsing it myself I'm trying to use the DLR. I'm using version .92 from the Codeplex repository and my solution is a .NET 3.5 website, and I'm having conflicts between the System.Core and Microsoft.Scripting.ExtensionAttribute DLLs. Error = { Description: "'ExtensionAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'.", File: "InternalXmlHelper.vb" } At this time I cannot upgrade to .NET 4.0, and I make significant use of the .NET 3.5 features (so downgrading is not an option either). Any help greatly appreciated.
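
    For reference, the usual DLR hosting pattern for this kind of rule evaluation looks roughly like the sketch below. It is illustrative only: it assumes IronPython 2.x on .NET 3.5 and that the stored rules are written in (or translated to) Python syntax, since the C-style "elseif" text from the database is not valid Python as-is.

        // Illustrative sketch: evaluate a rule with the DLR hosting API and IronPython.
        using IronPython.Hosting;
        using Microsoft.Scripting;
        using Microsoft.Scripting.Hosting;

        class RuleEvaluator
        {
            public static object Evaluate(int q1)
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();
                scope.SetVariable("Q1", q1);

                // Python version of the example rule from the question.
                string rule =
                    "if Q1 == 2:\n" +
                    "    result = 3.1\n" +
                    "elif Q1 == 3:\n" +
                    "    result = 4.1\n" +
                    "else:\n" +
                    "    result = 5.9\n";

                ScriptSource source = engine.CreateScriptSourceFromString(rule, SourceCodeKind.Statements);
                source.Execute(scope);
                return scope.GetVariable("result");
            }
        }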

    Read the article

  • The Cleanest Reset for ARM Processor

    - by waffleman
    Lately, I've been cleaning up some C code that runs on an ARM7 controller. In some situations (upgrade, fatal error, etc...) the program will perform a reset. Presently it just jumps to 0 and assumes that the start-up code will reinitialize everything correctly. It got me thinking about what would be the best procedure, a la "Leave No Trace", for an ARM reset. Here is my first crack at it: void Reset(void) { /* Disable interrupts */ __disable_interrupts(); /* Reset peripherals, externals and processor */ AT91C_BASE_RSTC->RSTC_RCR = AT91C_RSTC_KEY | AT91C_RSTC_PERRST | AT91C_RSTC_EXTRST| AT91C_RSTC_PROCRST; while(AT91C_BASE_RSTC->RSTC_RSR & AT91C_RSTC_SRCMP); /* Jump to the reset vector */ (*(void(*)())0)(); } Anything I haven't considered?

    Read the article

  • Using XCode and instruments to improve iPhone app performance

    - by MrDatabase
    I've been experimenting with Instruments off and on for a while and I still can't do the following (with any sensible results): determine or estimate the average runtime of a function that's called many times. For example, if I'm driving my gameLoop at 60 Hz with a CADisplayLink, I'd like to see how long the loop takes to run on average... 10 ms? 30 ms? etc. I've come close with the "CPU activity" instrument but the results are inconsistent or don't make sense. The Time Profiler seems promising but all I can get is "% of runtime"... and I'd like an actual runtime.

    Read the article

  • Upgrading Visual Studio web service project says to "convert to web application."

    - by Buggieboy
    I have a Visual Studio 2003 web service project that I have to upgrade to Visual Studio 2008. After I have run the conversion wizard, I get this message: You have completed the first step in converting your Visual Studio .NET 2003 web project. To complete the conversion, please select your project in the Solution Explorer and choose the 'Convert to Web Application' context menu item. I got this message with another project, which was originally a "web site" rather than an ASP.NET "web application", and it made sense in that case (sort of). Why, however, would I not just want to have this project remain a web service project? Additionally, when I follow the instructions and select "Convert to Web Application" from the context menu, I don't get any feedback that anything has changed. Should it have? If so, what?

    Read the article

  • Vista not supporting trueSpeech!

    - by csmba
    My web application has (server side) a WAV file saved using TrueSpeech; a client (IE6, IE7) can ask to play the file and the web server serves it. All you need on XP is WMP 9 or higher and it all works, but on Vista the client box suddenly can't play the file because it doesn't support TrueSpeech (some upgrade!). Does anyone have an idea of what I should do? You can suggest a way to make it work on the client (though in general I don't like solutions that involve installing anything on the client box), or you can suggest that I not save the server-side file in TrueSpeech and instead use something else (what?).

    Read the article
