Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.

Page 239/682

  • Have I discovered a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /CLR that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls hosted via ElementHost. Due to the architecture of our software we cannot use HwndHost directly. We have observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually just work fine never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value) but the visual representation never reflects it: I set the background to red but the background color does not change. We tested this on a lot of different machines, all running Windows XP SP2, and it is pretty reproducible.

    The funny thing is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and re-show the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values - but just once; if I want to see another update I have to resize the ElementHost again.

    Do you have any explanation of what could be happening here, or how I could approach this problem to find out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens at high CPU load during startup.

    Read the article

  • Upgrade SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there. I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering:

    - What database features can we leverage with 2008 that we can't now?
    - What new T-SQL features can we look forward to using?
    - What performance benefits can we expect to see?
    - What else will make management go for it?

    And the converse:

    - What problems can we expect to encounter?
    - What other problems have people found when migrating?
    - Why fix something that isn't (technically) broken?

    We work in a Java shop, so any .NET/CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however.

    Some background: our main database machine is a 32-bit Dell Intel Xeon MP CPU at 2.0GHz, 40MB of RAM with Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases total under a TB, with some having more than 200 tables. They are busy, though, and during busy times we see 60-80% CPU utilisation.

    Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!

    Read the article

  • How to perform Rails model validation checks within model but outside of filters using ledermann-rails-settings and extensions

    - by user1277160
    Background: I'm using ledermann-rails-settings (https://github.com/ledermann/rails-settings) on a Rails 2/3 project to virtually extend the model with certain attributes that don't necessarily need to be placed in the DB in a wide table, and it's working out swimmingly for our needs. An additional reason I chose this gem is the post "How to create a form for the rails-settings plugin", which ties ledermann-rails-settings more closely to the model for the purpose of clean form_for usage for administrator GUI support. It's a perfect solution for addressing form_for support, although...

    What I'm running into now is properly validating the dynamic getters/setters before values are passed to the ledermann-rails-settings module. At the moment they are saved immediately, regardless of whether the model validation has actually fired - I can see through script/console that validation errors are being raised.

    Example: I would like to validate that the attribute :foo is within the range 0..100 for decimal usage (or even against a regex). I've found from the quoted post that I can use standard Rails validators (surprise, surprise), but I want to halt on actually saving any values until those are addressed - e.g. ensure that the user of the GUI has given 61.43 as a numerical value. The following code is borrowed from the quoted post:

        class User < ActiveRecord::Base
          has_settings
          validates_inclusion_of :foo, :in => 0..100

          def self.settings_attr_accessor(*args)
            # >> SOME SORT OF UNLESS MODEL.VALID? CHECK HERE
            args.each do |method_name|
              eval "
                def #{method_name}
                  self.settings.send(:#{method_name})
                end
                def #{method_name}=(value)
                  self.settings.send(:#{method_name}=, value)
                end
              "
            end
            # >> END UNLESS
          end

          settings_attr_accessor :foo
        end

    Anyone have any thoughts on pulling the state of the model at this point, outside of having to put this into a before filter? The goal here is to be able to use the standard validations and avoid rolling custom validation checks for each new settings_attr_accessor that is added. Thanks!
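
    One possible approach - a minimal sketch, not from the original post - is to buffer writes in an instance hash so standard validations run against the pending values, and only flush them to the settings store in a save callback (define_method replaces the eval for clarity; the @pending_settings name is invented here):

        class User < ActiveRecord::Base
          has_settings
          validates_inclusion_of :foo, :in => 0..100
          before_save :flush_pending_settings

          def self.settings_attr_accessor(*args)
            args.each do |method_name|
              define_method(method_name) do
                pending = @pending_settings || {}
                # fall back to the stored setting when no write is pending
                pending.fetch(method_name) { settings.send(method_name) }
              end
              define_method("#{method_name}=") do |value|
                (@pending_settings ||= {})[method_name] = value
              end
            end
          end

          settings_attr_accessor :foo

          private

          # runs only after validations have passed
          def flush_pending_settings
            (@pending_settings || {}).each { |k, v| settings.send("#{k}=", v) }
            @pending_settings = nil
          end
        end

    Because the getter reads the pending value first, validates_inclusion_of sees exactly what the form submitted, and nothing touches the settings table unless save succeeds.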

    Read the article

  • Why does Hibernate 2nd level cache only cache queries within a session?

    - by Synesso
    Using a named query in our application with ehcache as the provider, it seems that the query results are tied to the session within the cache. Any attempt to access the value from the cache a second time results in a LazyInitializationException. We have set lazy="true" for the following set mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">
            from ReferenceAdPoint adpoint
            left outer join fetch adpoint.synonyms
            where adpoint.adPointField.headingCode = ?
        </query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?
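
    For context - a hedged sketch, not from the original post: the query cache stores only identifiers, each session re-hydrates entities from the second-level cache, and a lazy collection proxy obtained in one session cannot be touched from another. The query must also be marked cacheable per invocation, and the collection initialized before the session closes (the heading-code value here is invented):

        Session session = sessionFactory.openSession();
        Query query = session.getNamedQuery("find.adpoints.by.heading");
        query.setLong(0, 42L);      // hypothetical heading code
        query.setCacheable(true);   // without this the query cache is never consulted
        List<ReferenceAdPoint> adPoints = (List<ReferenceAdPoint>) query.list();
        for (ReferenceAdPoint adPoint : adPoints) {
            Hibernate.initialize(adPoint.getSynonyms()); // force-load while the session is open
        }
        session.close();

    getSynonyms is an assumed accessor name; the point is that anything lazy must be initialized inside the owning session if it is to be used after that session closes.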

    Read the article

  • Any useful suggestions to figure out where memory is being freed in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour: during a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB. (Note: a registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows.) After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second. I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being freed. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger?

    Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this would cause the application to crash.
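
    One way to watch this without recompiling - a hedged suggestion, not from the original post - is to attach WinDbg (rather than the Visual Studio debugger) and sample the heap statistics while the process size is falling:

        !heap -s       (summary of all NT heaps: committed/reserved sizes)
        !heap -stat    (per-heap allocation statistics by block size)

    Repeating !heap -s every few seconds and diffing the committed column narrows the release to one heap; a breakpoint on ntdll!RtlFreeHeap (filtered on that heap handle) then yields call stacks for the frees without touching the application's code.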

    Read the article

  • Interpolating data points in Excel

    - by Niels Basjes
    Hi, I'm sure this is the kind of problem others have solved many times before. A group of people are going to take measurements (home energy usage, to be exact). All of them will do so at different times and at different intervals. So what I'll get from each person is a set of {date, value} pairs, with some dates missing from the set. What I need is a complete set of {date, value} pairs where for each date within the range a value is known (either measured or calculated). I expect that a simple linear interpolation would suffice for this project. Assuming it must be done in Excel, what is the best way to interpolate in such a dataset (so I have a value for every day)? Thanks.

    NOTE: When these datasets are complete, I'll determine the slope (i.e. usage per day) and from that we can start doing home-to-home comparisons.

    ADDITIONAL INFO after the first few suggestions: I do not want to manually figure out where the holes are in my measurement set (too many incomplete measurement sets!!). I'm looking for something (existing and) automatic to do that for me. So if my input is

        {2009-06-01, 10} {2009-06-03, 20} {2009-06-06, 110}

    then I expect to automatically get

        {2009-06-01, 10} {2009-06-02, 15} {2009-06-03, 20} {2009-06-04, 50} {2009-06-05, 80} {2009-06-06, 110}

    Yes, I can write software that does this. I am just hoping that someone already has a "ready to run" (Excel) feature for this (rather generic) problem.
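
    For reference, the linear-interpolation formula between the two nearest known points (x1, y1) and (x2, y2) is y = y1 + (x - x1) * (y2 - y1) / (x2 - x1). A minimal worked sketch in Excel, assuming dates in column A and values in column B, with the missing 2009-06-04 value in row 5 bracketed by known rows 3 (2009-06-03, 20) and 6 (2009-06-06, 110) - the cell layout is invented here:

        =B3 + (A5 - A3) * (B6 - B3) / (A6 - A3)

    which gives 20 + 1 * 90 / 3 = 50, matching the expected output above. Excel's FORECAST function performs the same straight-line fit when handed exactly the two bracketing pairs.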

    Read the article

  • When should we use Views, Temporary Tables and Direct Queries? What are the performance issues in a stored procedure?

    - by Shantanu Gupta
    I want to know the performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created when a certain trigger fires. I know this trigger will fire very rarely, and only once, at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to use this table's data joined with multiple other tables to fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on
        select a, b, c from #tx  -- do something
        select d, e, f from #tx  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    By using a view:

        create view viewname as
        select ... from triggertable join t2 join t3 -- and so on

        select a, b, c from viewname  -- do something
        select d, e, f from viewname  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    This view can be used in other places as well, so I would be creating it at the database level rather than inside the stored procedure. By using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)  -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)  -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    Now I can use a view / temporary table / direct query for all the upcoming queries. What would be the best to use in this case?
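
    A hedged sketch of the temp-table variant (table and column names are the question's own placeholders, join keys are invented): since the source data is read-only and the same join feeds 6-7 queries, materialising it once and indexing the filter column usually beats re-running the join, whereas a plain (non-indexed) view re-executes the underlying join on every reference:

        -- materialise the join once per procedure call
        SELECT t.a, t.b, t.c, t.d, t.e, t.f
        INTO #tx
        FROM triggertable t
        JOIN t2 ON t2.id = t.id   -- hypothetical join keys
        JOIN t3 ON t3.id = t.id;

        -- index the column the later queries filter on
        CREATE CLUSTERED INDEX ix_tx_a ON #tx (a);

        SELECT a, b, c FROM #tx WHERE a = @p1;  -- reused without re-joining
        SELECT d, e, f FROM #tx WHERE a = @p2;

        DROP TABLE #tx;  -- optional; dropped automatically when the procedure ends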

    Read the article

  • Memory leaks while using array of double

    - by Gacek
    I have a part of code that operates on large arrays of double (containing at least about 6000 elements) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
        }

    the memory usage rises by about 40MB (from 40MB at the beginning of the loop to 80MB at the end). When I force the garbage collector to execute at every iteration, the memory usage stays at the level of 40MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But the execution time is 3 times longer! (It is crucial.) How can I force C# to use the same area of memory instead of allocating new arrays? Note: I have access to the code of the someObject class, so if it were needed, I could change it.
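
    Since someObject can be changed, one option - a minimal sketch, with FillOutput as a hypothetical replacement for producesOutput - is to invert the allocation: the caller owns one buffer and the producer writes into it, so no per-iteration array is ever allocated and the GC has nothing to collect:

        double[] singleRow = new double[6000];   // allocated once, reused 800 times
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            someObject.FillOutput(singleRow);    // writes results into the caller's buffer
            // ... do something with singleRow ...
        }

        // inside someObject (hypothetical):
        public void FillOutput(double[] buffer)
        {
            for (int j = 0; j < buffer.Length; j++)
            {
                buffer[j] = ComputeValue(j);     // placeholder for the real computation
            }
        }

    Note that in the original loop the assignment singleRow = someObject.producesOutput() discards the preallocated array on the first iteration anyway - the variable simply takes whatever new array the method returns.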

    Read the article

  • x509 certificate Information

    - by sid
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 95 (0x5f)
                Signature Algorithm: sha1WithRSAEncryption
                Issuer: C=, O=, CN=
                Validity
                    Not Before: Apr 22 16:42:11 2008 GMT
                    Not After : Apr 22 16:42:11 2009 GMT
                Subject: C=, O=, CN=, L=, ST=
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit): ...
                        Exponent: 65537 (0x10001)
                X509v3 extensions:
                    X509v3 Key Usage: critical
                        Digital Signature, Key Encipherment
                    X509v3 Extended Key Usage: critical
                        Code Signing
                    X509v3 Authority Key Identifier:
                        keyid: ...
            Signature Algorithm: sha1WithRSAEncryption
                a9:55:56:9b:9e:60:7a:57:fd:7:6b:1e:c0:79:1c:50:62:8f: ...
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----

    In this certificate, which part is the public key - is it the modulus? What does the hex string after the second Signature Algorithm (a9:55:56:...) represent - is it a message digest? And what is between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- - is that the whole certificate? As a novice I am a little confused between the message digest and the public key. Thanks in advance - opensid
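
    For orientation - a hedged aside, not from the original post: the RSA public key is the modulus and the exponent taken together; the hex after the trailing Signature Algorithm line is the issuer's signature over the certificate body (a digest encrypted with the issuer's private key, not the bare digest); and the BEGIN/END block is the entire certificate, DER-encoded and base64-wrapped. The openssl tool can pull each piece out of a PEM file (cert.pem is a placeholder filename):

        openssl x509 -in cert.pem -noout -text      # decoded view, like the dump above
        openssl x509 -in cert.pem -noout -modulus   # just the RSA modulus
        openssl x509 -in cert.pem -noout -pubkey    # the full public key, PEM-encoded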

    Read the article

  • Custom Controls Properties - C#, Forms :(

    - by user353600
    Hi, I am adding custom controls to my FlowLayoutPanel. It's a sort of forex data display that refreshes every second: on each timer tick I create a control, set its buttons' text, then add it to the FlowLayoutPanel. I am doing this on every 100ms timer tick and it is taking too much CPU. Here is my custom control:

        public partial class UserControl1 : UserControl
        {
            public UserControl1()
            {
                InitializeComponent();
            }

            private void UserControl1_Load(object sender, EventArgs e)
            {
            }

            public void displaydata(string name, string back3price, string back3,
                string back2price, string back2, string back1price, string back1,
                string lay3price, string lay3, string lay2price, string lay2,
                string lay1price, string lay1)
            {
                lblrunnerName.Text = name.ToString();
                btnback3.Text = back3.ToString() + "\n" + back3price.ToString();
                btnback2.Text = back2.ToString() + "\n" + back2price.ToString();
                btnback1.Text = back1.ToString() + "\n" + back1price.ToString();
                btnlay1.Text = lay1.ToString() + "\n" + lay1price.ToString();
                btnlay2.Text = lay2.ToString() + "\n" + lay2price.ToString();
                btnlay3.Text = lay3.ToString() + "\n" + lay3price.ToString();
            }
        }

    And here is how I am adding the controls:

        private void timer1_Tick(object sender, EventArgs e)
        {
            localhost.marketData[] md;
            md = ser.getM1();
            flowLayoutPanel1.Controls.Clear();
            foreach (localhost.marketData item in md)
            {
                UserControl1 ur = new UserControl1();
                ur.Name = item.runnerName + item.runnerID;
                ur.displaydata(item.runnerName, item.back3price, item.back3,
                    item.back2price, item.back2, item.back1price, item.back1,
                    item.lay3price, item.lay3, item.lay2price, item.lay2,
                    item.lay1price, item.lay1);
                flowLayoutPanel1.SuspendLayout();
                flowLayoutPanel1.Controls.Add(ur);
                flowLayoutPanel1.ResumeLayout();
            }
        }

    This now happens about 10 times on each send and takes 60% of my Core 2 Duo CPU. Is there another way, where I add the controls only the first time and then just change the text of the custom controls' buttons at runtime on each refresh or timer tick? I am using C#/.NET.
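
    A minimal sketch of that update-in-place approach (assuming item.runnerName + item.runnerID is stable across refreshes): look each control up by Name, create it only when it is missing, and otherwise just refresh its text, which avoids the Clear/recreate churn that burns the CPU:

        private void timer1_Tick(object sender, EventArgs e)
        {
            localhost.marketData[] md = ser.getM1();
            flowLayoutPanel1.SuspendLayout();   // suspend once per tick, not per control
            foreach (localhost.marketData item in md)
            {
                string key = item.runnerName + item.runnerID;
                UserControl1 ur = flowLayoutPanel1.Controls[key] as UserControl1;
                if (ur == null)
                {
                    // first time we see this runner: create the control once
                    ur = new UserControl1();
                    ur.Name = key;
                    flowLayoutPanel1.Controls.Add(ur);
                }
                // every other tick: only the button text changes
                ur.displaydata(item.runnerName, item.back3price, item.back3,
                    item.back2price, item.back2, item.back1price, item.back1,
                    item.lay3price, item.lay3, item.lay2price, item.lay2,
                    item.lay1price, item.lay1);
            }
            flowLayoutPanel1.ResumeLayout();
        }

    Controls[key] is the ControlCollection lookup-by-Name indexer; it returns null when no match exists.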

    Read the article

  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode, using VS2008/C++. We will need to cache hundreds of 2-3MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that infer memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this - for example, splitting the blobs and loading them into memcache or an in-memory database - but they all have too many problems with performance or code complexity. Suggestions?
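
    The Win32 primitive for pinning pages is VirtualLock, which keeps them in the working set so the memory manager cannot page them out. A minimal sketch, not production code - the default working-set quota is small, so it must be raised first or VirtualLock fails once more than a few megabytes are locked (the padding constants here are arbitrary):

        #include <windows.h>

        void* AllocLocked(SIZE_T bytes)
        {
            // raise the working-set quota before locking large amounts
            SetProcessWorkingSetSize(GetCurrentProcess(),
                                     bytes + (64 << 20),    // minimum working set
                                     bytes + (128 << 20));  // maximum working set
            void* p = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (p != NULL && !VirtualLock(p, bytes))        // pin the pages in RAM
            {
                VirtualFree(p, 0, MEM_RELEASE);
                return NULL;
            }
            return p;
        }

    Locking more than a gigabyte takes physical RAM away from everything else on the box, so this only makes sense on a machine largely dedicated to the application.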

    Read the article

  • Boost threading/mutexes, why does this work?

    - by Flamewires
    Code:

        #include "stdafx.h"
        #include <iostream>
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        using namespace std;

        boost::mutex mut;
        double results[10];

        void doubler(int x)
        {
            //boost::mutex::scoped_lock lck(mut);
            results[x] = x * 2;
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            boost::thread_group thds;
            for (int x = 10; x > 0; x--)
            {
                boost::thread *Thread = new boost::thread(&doubler, x);
                thds.add_thread(Thread);
            }
            thds.join_all();
            for (int x = 0; x < 10; x++)
            {
                cout << results[x] << endl;
            }
            return 0;
        }

    Output:

        0
        2
        4
        6
        8
        10
        12
        14
        16
        18
        Press any key to continue . . .

    So... my question is: why does this work (as far as I can tell - I ran it about 20 times), producing the above output, even with the locking commented out? I thought the general idea was, in each thread:

        calculate 2*x
        copy results to CPU register(s)
        store calculation in correct part of array
        copy results back to main (shared) memory

    I would think that under all but perfect conditions this would result in some part of the results array having 0 values. Is it only copying the required double of the array to a CPU register? Or is it just too short a calculation to get preempted before it writes the result back to RAM? Thanks.

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities:

    - It persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
    - OLE DB Destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers/suggestions.

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki):

        glTranslate, glRotate, glScale

        Are these hardware accelerated?

        No, there are no known GPUs that execute this. The driver computes the matrix
        on the CPU and uploads it to the GPU. All the other matrix operations are done
        on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum,
        glOrtho. This is the reason why these functions are considered deprecated in
        GL 3.0. You should have your own math library, build your own matrix, upload
        your matrix to the shader.

    For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.

    Read the article

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec)
        {
            vector<T> aux(vec.size()); // dynamically allocates memory
            // process data
        }

        // usage:
        processData(v);

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once at startup. I want that whenever I allocate a vector, an auxiliary buffer for my processData function is allocated automatically. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux)
        {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec)
        {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer); // use aux_buffer as the vector's storage
            _processData(vec, aux);
        }

        // usage:
        processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?
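
    One alternative - a minimal sketch, not from the original post - is to hoist the scratch storage into a small arena object that is sized once at startup and passed (or held as a member) wherever processData is called, which keeps the buffer size a runtime value instead of a template parameter:

        #include <cassert>
        #include <cstddef>
        #include <vector>

        // One preallocated scratch arena; construct it once at startup.
        template <typename T>
        class Scratch
        {
            std::vector<T> storage_;   // the only allocation, done at startup
        public:
            explicit Scratch(std::size_t maxSize) : storage_(maxSize) {}

            // Hand out a view of the first n elements; no allocation at call time.
            T* get(std::size_t n)
            {
                assert(n <= storage_.size());
                return &storage_[0];
            }
        };

        template <typename T>
        void processData(std::vector<T>& vec, Scratch<T>& scratch)
        {
            T* aux = scratch.get(vec.size());
            // ... process vec using aux[0 .. vec.size()) ...
            (void)aux;
        }

        // usage:
        //   Scratch<double> scratch(V_MAX_SIZE); // at startup, before the real-time section
        //   processData(v, scratch);             // no allocation inside

    One arena per thread keeps this safe; sharing a single arena across threads would need a lock or per-thread instances.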

    Read the article

  • What about parallelism across a network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example, the Parallel Extensions which are directly available in .NET 4). Now what about parallelism across the network? I mean an abstraction of everything related to communication, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () =>
        {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    - The fact that such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even local variables, because it would be two distinct applications rather than two threads of the same application.
    - The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain?

    Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • one-to-many with criteria question

    - by brnzn
    I want to apply restrictions on the list of items, so only items from given dates will be retrieved. Here are my mappings:

        <class name="MyClass" table="MyTable" mutable="false">
            <cache usage="read-only"/>
            <id name="myId" column="myId" type="integer"/>
            <property name="myProp" type="string" column="prop"/>
            <list name="items" inverse="true" cascade="none">
                <key column="myId"/>
                <list-index column="itemVersion"/>
                <one-to-many class="Item"/>
            </list>
        </class>

        <class name="Item" table="Items" mutable="false">
            <cache usage="read-only"/>
            <id name="myId" column="myId" type="integer"/>
            <property name="itemVersion" type="string" column="version"/>
            <property name="startDate" type="date" column="startDate"/>
        </class>

    I tried this code:

        Criteria crit = session.createCriteria(MyClass.class);
        crit.add(Restrictions.eq("myId", new Integer(1)));
        crit = crit.createCriteria("items").add(Restrictions.le("startDate", new Date()));

    which results in the following queries:

        select ... from MyTable this_
            inner join Items items1_ on this_.myId=items1_.myId
            where this_.myId=? and items1_.startDate<=?

    followed by

        select ... from Items items0_ where items0_.myId=?

    But what I need is something like:

        select ... from MyTable this_ where this_.myId=?

    followed by

        select ... from Items items0_ where items0_.myId=? and items0_.startDate<=?

    Any idea how I can apply a criteria to the list of items?
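
    One approach - a hedged sketch, not from the original post - is to load the parent by its id alone and then run a Hibernate collection filter, which issues exactly the second pair of queries: one against MyTable, then one against Items with the extra date restriction applied to the lazy collection (getItems is an assumed accessor name):

        MyClass parent = (MyClass) session.createCriteria(MyClass.class)
                .add(Restrictions.eq("myId", new Integer(1)))
                .uniqueResult();                        // select ... from MyTable only

        List items = session.createFilter(parent.getItems(),
                        "where this.startDate <= :now") // select ... from Items with the date check
                .setDate("now", new Date())
                .list();

    Session.createFilter operates on the uninitialized collection of a persistent instance, so the full items list is never loaded.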

    Read the article

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is one that takes two arrays of numbers and sums the values at the corresponding indexes into a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float *n, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for (int i = 0; i < gSize; i++)
            {
                if (a[gid] > a[i])
                    x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU - the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?

    Read the article

  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously results in temporary memory spikes, causing the process's virtual memory size to approach the 2GB limit. This memory is released back pretty quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like. I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2GB of virtual address space (my box has lots of RAM). I am wondering what the best/easiest methods are to communicate the input and output between the processes in .NET? The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows- or .NET-specific mechanism I should use?
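
    A minimal sketch of one .NET-native option, anonymous pipes (System.IO.Pipes, available since .NET 3.5), which are designed exactly for parent-child communication - Worker.exe and the one-line protocol here are invented for illustration:

        // Parent: spawn the child and hand it one end of the pipe.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.IO.Pipes;

        class Parent
        {
            static void Main()
            {
                using (var pipe = new AnonymousPipeServerStream(
                           PipeDirection.Out, HandleInheritability.Inheritable))
                {
                    var psi = new ProcessStartInfo("Worker.exe",
                                   pipe.GetClientHandleAsString());
                    psi.UseShellExecute = false;         // required for handle inheritance
                    Process.Start(psi);
                    pipe.DisposeLocalCopyOfClientHandle();

                    using (var writer = new StreamWriter(pipe))
                    {
                        writer.AutoFlush = true;
                        writer.WriteLine("work item 1");  // send input to the child
                    }
                }
            }
        }

        // Child (Worker.exe, a separate executable): read the handle from the command line.
        class Worker
        {
            static void Main(string[] args)
            {
                using (var pipe = new AnonymousPipeClientStream(PipeDirection.In, args[0]))
                using (var reader = new StreamReader(pipe))
                {
                    string line = reader.ReadLine();      // receive the work item
                    // ... run the 32-bit native algorithm, return results via a
                    // second pipe or a temp file ...
                }
            }
        }

    For larger payloads, memory-mapped files (native in .NET 4 via System.IO.MemoryMappedFiles) play the role of UNIX shared memory.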

    Read the article

  • Can you dynamically combine multiple conditional functions into one in Python?

    - by erich
    I'm curious whether it's possible to take several conditional functions and create one function that checks them all (e.g. the way a generator takes a procedure for iterating through a series and creates an iterator). The basic use case would be when you have a large number of conditional parameters (e.g. "max_a", "min_a", "max_b", "min_b", etc.), many of which could be blank. They would all be passed to this "function-creating" function, which would then return one function that checks them all. Below is an example of a naive way of doing what I'm asking:

        def combining_function(max_a, min_a, max_b, min_b, ...):
            f_array = []
            if max_a is not None:
                f_array.append(lambda x: x.a < max_a)
            if min_a is not None:
                f_array.append(lambda x: x.a > min_a)
            ...
            return lambda x: all([f(x) for f in f_array])

    What I'm wondering is: what is the most efficient way to achieve what's being done above? It seems like executing a function call for every function in f_array would create a decent amount of overhead, but perhaps I'm engaging in premature/unnecessary optimization. Regardless, I'd be interested to see if anyone else has come across use cases like this and how they proceeded. Also, if this isn't possible in Python, is it possible in other (perhaps more functional) languages?
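
    A runnable generalisation of that naive version - a minimal sketch, with the max_/min_ naming convention assumed from the example: keyword arguments are scanned once at build time, each lambda captures its own attribute name and threshold via default arguments (avoiding the classic late-binding closure pitfall), and the returned predicate short-circuits through a generator instead of building a list:

        def combining_function(**bounds):
            checks = []
            for name, value in bounds.items():
                if value is None:
                    continue  # blank parameters contribute no check
                kind, _, attr = name.partition('_')   # "max_a" -> ("max", "_", "a")
                if kind == 'max':
                    checks.append(lambda x, a=attr, v=value: getattr(x, a) < v)
                elif kind == 'min':
                    checks.append(lambda x, a=attr, v=value: getattr(x, a) > v)
            return lambda x: all(check(x) for check in checks)

        # usage:
        #   pred = combining_function(max_a=100, min_a=0, max_b=None)
        #   matching = [item for item in items if pred(item)]

    The per-check call overhead is real but small, and all() stops at the first failing check, so in filtering workloads this is usually fast enough before any further optimization.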

    Read the article

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time - hence the word "static". This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, or converting one pointer type to another. The general idea I get is that this is unportable and should be avoided.

    Where I am a little confused is one usage which I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. What cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. From what I have been reading, it appears static_cast is better, as the cast can happen at compile time? Though it says to use reinterpret_cast to convert from one pointer type to another?
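
    For reference, a minimal sketch of the void* round-trip (Widget is an invented type): static_cast is sufficient and well-defined in both directions, because any object pointer converts to void* and back to its original type without loss; reinterpret_cast is only needed where no such standard conversion exists (e.g. between unrelated object pointer types, or pointer/integer conversions):

        struct Widget { int value; };

        // handing the object to the C side
        Widget* w = new Widget();
        void* handle = static_cast<void*>(w);         // implicit anyway; static_cast documents it

        // recovering it in a callback invoked from C
        Widget* back = static_cast<Widget*>(handle);  // well-defined: same type as the original

    The round trip is only guaranteed when the void* is cast back to exactly the type that was stored; casting the handle to some other class is where reinterpret_cast-style danger begins.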

    Read the article

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of:

    1) Pricing
    2) Latency resulting from: slow CPUs, instance creation, JIT compiles, etc.; internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process); and communication between the cloud and the end user
    3) Ease of deployment

    The usage scenario I am expecting:

    - A typical user sends a query (XML of around 1KB) once every 30 seconds on average.
    - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
    - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds, and in general no more than 5 seconds.
    - A background save of the response to a DB should occur (not time-critical).
    - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec of calculation, which would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system - past threads, different blogs, books, etc.? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article
