Search Results

Search found 8647 results on 346 pages for 'intel cpu'.

Page 318/346 | < Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >

  • How to modify the keyboard interrupt (under Windows XP) from a C++ program?

    - by rockr90
    Hi everyone! We have been given a little project (as part of my OS course) to make a Windows program that modifies keyboard input so that any lowercase character entered is transformed into an uppercase one (without using Caps Lock): as you type, you see your input appear in uppercase. I did this quite easily in Turbo C by calling geninterrupt() and using the variables _AH and _AL. I read a character with _AH = 0x07; // read a character without echo, then geninterrupt(0x21); // DOS interrupt. To turn it into an uppercase letter I mask the 5th bit: _AL = _AL & 0xDF; // mask the entered character with 11011111, and then display the character with any output routine. This solution only works under old C DOS compilers; what we want is a close or similar solution using any modern C/C++ compiler under Windows XP. My first thought was to modify the keyboard ISR so that it masks the fifth bit of any entered character to turn it uppercase, but I do not know exactly how to do that. Second, I wanted to create a Win32 console program to do the same thing, or at least a Windows-compatible equivalent, but I do not know which functions to use. Third, I thought of making a Windows program that modifies the ISR directly to suit my needs, and I'm still looking for how to do this. If you could help me out with this, I would greatly appreciate it. Thank you in advance! (I'm using Windows XP on Intel x86 with the MinGW GCC compiler.)
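
    A minimal starting point under a modern compiler, assuming MinGW's <conio.h> provides getch() for unbuffered, no-echo input: this is only a console-level sketch of the same bit-masking idea, not a real ISR hook (a system-wide version would need something like a SetWindowsHookEx keyboard hook or a filter driver).

    ```cpp
    #include <conio.h>   // MinGW/MSVC console I/O; getch() may also be spelled _getch()
    #include <cstdio>

    int main() {
        for (;;) {
            int c = getch();           // read one keypress without echo
            if (c == 27) break;        // Esc quits; special keys (arrows etc.) not handled
            if (c >= 'a' && c <= 'z')
                c &= 0xDF;             // clear bit 5: 'a'..'z' -> 'A'..'Z'
            std::putchar(c);
        }
        return 0;
    }
    ```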

    Read the article

  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow I've got very slow Mysql queries coming up from my wordpress site. It's making everything slow and I think this is eating up CPU usage. I've pasted the Explain results for the two most frequently problematic queries below. This is a typical result - although very occasionally teh queries do seem to be performed at a more normal speed. I have the usual wordpress indexes on the database tables. You will see that one of the queries is generated from wordpress core code, and not from anything specific - like the theme - for my site. I have a vague feeling that the database is not always using the indexes/is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated Query: [wp-blog-header.php(14): wp()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort Query time: 34.2829 (ms) 9) Query: [wp-content/themes/LMHR/index.php(40): query_posts()] SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.ID NOT IN ( SELECT tr.object_id FROM wp_term_relationships AS tr INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224') ) AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private') ORDER BY wp_posts.post_date DESC LIMIT 0, 6 id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY wp_posts ref type_status_date type_status_date 63 const 427 Using where; Using filesort 2 DEPENDENT SUBQUERY tr ref PRIMARY,term_taxonomy_id PRIMARY 8 func 1 Using index 2 DEPENDENT SUBQUERY tt eq_ref PRIMARY,term_id_taxonomy,taxonomy PRIMARY 8 antin1_lovemusic2010.tr.term_taxonomy_id 1 Using where Query time: 70.3900 (ms)

    Read the article

  • C++ Builder 2010 exception deadlock?

    - by James
    Hello. Is this some kind of exception deadlock I am facing, and how do I avoid it? Have a look at the line below: I have the TIdContext objects of connected clients stored in an ObjList, and at times I need to process that list. If a user disconnects while another thread is processing the list, then for the freed TIdContext->Data object I get an access violation. Fine, I am using try/catch, but the problem is that on the line below there is some kind of deadlock and the process hangs; if I attach a debugger it shows the access violation again and again and again, and CPU consumption goes up because of that exception deadlock. AnsiString UserID = ((Tmyobject*) ((TIdContext*) ObjList->Objects[i])->Data)->UserID; I know I can check that the object is not NULL before accessing it, and that works. But my question is: what if, once in a blue moon, the Data object is freed right after the NULL check passes, so that on the next line, when I access the object again, I get the same deadlock? So how do I avoid/handle this deadlock exception? Here is the call stack... :005F07C0 System::AnsiStringBase::AnsiStringBase(this=:0285FCE0, src=????) :0040223F System::AnsiStringT<0>::AnsiStringT<0>(this=:0285FCE0, src=:00000008) :00457996 TSomeClass::SomeFunction(this=:009D8230, UserID={ }, DataSize={ }, ) :0047BFF1 __linkproc__ ThreadProc(Thread=:009561C0) :004AD00E __linkproc__ ThreadWrapper(Parameter=:009EAA30) :7c80b729 ; C:\WINDOWS\system32\kernel32.dll Please help! Thanks
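
    For what it's worth, the usual fix is not a NULL check but a lock shared by the reader and by whatever frees the Data object (Indy's Contexts list exposes LockList/UnlockList for this purpose, if memory serves). A rough std::mutex illustration, with made-up stand-ins for ObjList and Tmyobject:

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <mutex>
    #include <string>
    #include <vector>

    // Illustrative stand-ins: ClientData plays the role of the object kept in
    // TIdContext->Data, and g_clients the role of ObjList.
    struct ClientData { std::string UserID; };

    std::mutex g_clientsMutex;
    std::vector<ClientData*> g_clients;

    std::string UserIdAt(std::size_t i) {
        std::lock_guard<std::mutex> lock(g_clientsMutex);
        if (i >= g_clients.size() || g_clients[i] == nullptr)
            return std::string();              // client already gone, no dangling read
        return g_clients[i]->UserID;           // safe: cannot be freed while we hold the lock
    }

    void OnDisconnect(ClientData* c) {
        std::lock_guard<std::mutex> lock(g_clientsMutex);
        g_clients.erase(std::remove(g_clients.begin(), g_clients.end(), c),
                        g_clients.end());
        delete c;                              // freed only while no reader holds the lock
    }
    ```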

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
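
    The standard way to use more cores on a single file is tree (chunked) hashing: hash fixed-size chunks independently, then hash the concatenation of the chunk digests. The result is reproducible but is no longer a plain MD5/SHA of the whole file. A toy sketch of the structure, with std::hash standing in for a real cryptographic hash:

    ```cpp
    #include <cstdint>
    #include <functional>
    #include <future>
    #include <string>
    #include <vector>

    // Placeholder leaf hash; a real implementation would run MD5/SHA-256 here.
    std::uint64_t chunk_hash(const std::string& chunk) {
        return std::hash<std::string>{}(chunk);
    }

    // Root hash = hash of the concatenated chunk digests (a two-level hash tree).
    // One task per chunk for clarity; a real version would bound the pool to the core count.
    std::uint64_t tree_hash(const std::vector<std::string>& chunks) {
        std::vector<std::future<std::uint64_t>> leaves;
        for (const auto& c : chunks)
            leaves.push_back(std::async(std::launch::async, chunk_hash, c));

        std::string combined;
        for (auto& f : leaves)
            combined += std::to_string(f.get());
        return std::hash<std::string>{}(combined);
    }
    ```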

    Read the article

  • Dependency Properties, change notification and setting values in the constructor

    - by stefan.at.wpf
    Hello, I have a class with 3 dependency properties A, B, C. The values of these properties are set by the constructor, and every time one of the properties A, B or C changes, the method recalculate() is called. During execution of the constructor this method is therefore called 3 times, because the 3 properties A, B, C are changed. However, this isn't necessary, as recalculate() can't do anything really useful until all 3 properties are set. So what's the best way to keep property change notification but circumvent it in the constructor? I thought about adding the property changed notification in the constructor, but then each object of the DPChangeSample class would keep adding more and more change notifications. Thanks for any hint! class DPChangeSample : DependencyObject { public static DependencyProperty AProperty = DependencyProperty.Register("A", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged)); public static DependencyProperty BProperty = DependencyProperty.Register("B", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged)); public static DependencyProperty CProperty = DependencyProperty.Register("C", typeof(int), typeof(DPChangeSample), new PropertyMetadata(propertyChanged)); private static void propertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { ((DPChangeSample)d).recalculate(); } private void recalculate() { // Using A, B, C, do some CPU-intensive calculations } public DPChangeSample(int a, int b, int c) { SetValue(AProperty, a); SetValue(BProperty, b); SetValue(CProperty, c); } }
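
    One common answer is a construction flag: the change callback does nothing while the object is still being constructed, and the constructor runs a single explicit recalculate() once all three values are in place. Sketched below in plain C++, since the idea is not WPF-specific; in the class above, propertyChanged would test such a flag before calling recalculate().

    ```cpp
    #include <iostream>

    class Sample {
        int a_ = 0, b_ = 0, c_ = 0;
        bool initializing_ = true;

        void recalculate() { std::cout << "recalculate: " << a_ + b_ + c_ << "\n"; }
        void onChanged()   { if (!initializing_) recalculate(); }   // suppressed during construction

    public:
        Sample(int a, int b, int c) {
            setA(a); setB(b); setC(c);   // no recalculation fired yet
            initializing_ = false;
            recalculate();               // exactly one pass, with all values set
        }
        void setA(int v) { a_ = v; onChanged(); }
        void setB(int v) { b_ = v; onChanged(); }
        void setC(int v) { c_ = v; onChanged(); }
    };

    int main() { Sample s(1, 2, 3); }
    ```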

    Read the article

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with FlexBuilder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were: [enterFrameEvent] - 84% cumulative, 32% self time [reap] - 20% cumulative and self time [tincan] - 8% cumulative and self time global.isNaN - 4% cumulative and self time All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to find dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try and track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • map operator [] operands

    - by Jamie Cook
    Hi all I have the following in a member function int tt = 6; vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt]; set<int>& egressCandidateStops = temp.at(dest); and the following declaration of a member variable map<int, vector<set<int>>> m_egressCandidatesByDestAndOtMode; However I get an error when compiling (Intel Compiler 11.0) 1>C:\projects\svn\bdk\Source\ZenithAssignment\src\Iteration\PtBranchAndBoundIterationOriginRunner.cpp(85): error: no operator "[]" matches these operands 1> operand types are: const std::map<int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>, std::less<int>, std::allocator<std::pair<const int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>>>> [ const int ] 1> vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt]; 1> ^ I know it's got to be something silly but I can't see what I've done wrong.
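
    The message usually means the map is being reached through a const path (for example inside a const member function), and std::map::operator[] is non-const because it may insert a missing key. A small sketch of the const-correct lookup, with the type shortened for readability:

    ```cpp
    #include <map>
    #include <set>
    #include <stdexcept>
    #include <vector>

    typedef std::map<int, std::vector<std::set<int> > > Candidates;

    const std::vector<std::set<int> >& lookup(const Candidates& m, int tt) {
        Candidates::const_iterator it = m.find(tt);   // find() works on a const map
        if (it == m.end())
            throw std::out_of_range("no entry for tt");
        return it->second;                            // same effect as m.at(tt) in C++11
    }
    ```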

    Read the article

  • User to kernel mode big picture?

    - by fsdfa
    I have to implement a char device, an LKM. I know some OS basics, but I feel I don't have the big picture. In a C program, when I call a syscall, what I think happens is that the CPU switches to ring 0, then goes to the syscall vector and jumps to a kernel-memory-space function that handles it. (I think it executes int 0x80 and eax holds the offset into the syscall vector, but I'm not sure.) Then I'm in the syscall itself, but I guess that for the kernel it is the same process as before, only now running in kernel mode; I mean the current PCB is that of the process that called the syscall. So far, so good? Correct me if something is wrong. Other questions: how can I read/write process memory? If in the syscall handler I refer to an address, say 0xbfffffff, what does that address mean? A physical one? Some virtual kernel one?

    Read the article

  • Progressively stream the output of an ASP.NET page - or render a page outside of an HTTP request

    - by Evgeny
    I have an ASP.NET 2.0 page with many repeating blocks, including a third-party server-side control (so it's not just plain HTML). Each is quite expensive to generate, in terms of both CPU and RAM. I'm currently using a standard Repeater control for this. There are two problems with this simple approach: The entire page must be rendered before any of it is returned to the client, so the user must wait a long time before they see any data. (I write progress messages using Response.Write, so there is feedback, but no actual results.) The ASP.NET worker process must hold everything in memory at the same time. There is no inherent need for this: once one block is processed it won't be changed, so it could be returned to the client and the memory could be freed. I would like to somehow return these blocks to the client one at a time, as each is generated. I'm thinking of extracting the stuff inside the Repeater into a separate page and getting it repeatedly using AJAX, but there are some complications involved in that and I wonder if there is some simpler approach. Ideally I'd like to keep it as one page (from the client's point of view), but return it incrementally. Another way would be to do something similar, but on the server: still create a separate page, but have the server access it and then Response.Write() the HTML it gets to the response stream for the real client request. Is there a way to avoid an HTTP request here, though? Is there some ASP.NET method that would render a UserControl or a Page outside of an HTTP request and simply return the HTML to me as a string? I'm open to other ideas on how to do this as well.

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what the best "price field" in MSSQL is for a shop-like structure. Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have datatypes called money and smallmoney, then decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges: Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807) Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647) Decimal: 9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1) Float: 8 bytes (values: -1.79E+308 to 1.79E+308) Real: 4 bytes (values: -3.40E+38 to 3.40E+38) My question is: is it really wise to store price values in those types? What about e.g. INT? Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647) Let's say a shop uses dollars: they have cents, but I don't see prices like $49.2142342, so a lot of decimals just to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not normal webshops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily fake decimals by saving values in cents instead of dollars and dividing when you present the values. The other approach would be smallmoney, which is 4 bytes too, but it requires the math part of the CPU to do the calculation, whereas int is pure integer arithmetic; on the downside you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these map to in C#/.NET? Any pros/cons? Go for integer prices, smallmoney, or something else? What does your experience tell you?
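
    If the integer route is taken, the usual discipline is to keep every stored and computed amount in the smallest unit (cents) and only format it as dollars at the presentation edge, so the arithmetic stays exact. A tiny illustration of the idea in C++ (the same pattern applies in SQL and C#):

    ```cpp
    #include <cstdio>

    int main() {
        long long price_cents = 4999;             // $49.99 stored as 4999 cents
        long long total_cents = price_cents * 3;  // exact integer math, no float rounding
        std::printf("$%lld.%02lld\n", total_cents / 100, total_cents % 100);  // prints $149.97
        return 0;
    }
    ```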

    Read the article

  • Program structure in a long-running data-processing Python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so: <import statements> <constant declarations> <misc function declarations> def main(): for blah in blahs(): <lots of local variables> <lots of tightly coupled computation> for something in somethings(): <lots more local variables> <lots more computation> <etc., etc.> <save results> if __name__ == "__main__": main() This gets unmanageable quickly, so I want to refactor it into something more maintainable without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
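
    Since the question is largely language-agnostic, here is the usual shape of that refactor sketched in C++ rather than Python: the many loop-local variables become fields of a short-lived state object, so each stage can be a small method instead of a function with an enormous parameter list. The names and stages here are invented placeholders, not a prescription.

    ```cpp
    #include <vector>

    // One object per run: holds the formerly loop-local state, lives only for
    // the duration of execute(), and is never reused afterwards.
    class Run {
    public:
        void execute() {
            for (int blah : blahs()) {
                stageOne(blah);      // stages share fields, no long signatures
                stageTwo(blah);
            }
            saveResults();
        }

    private:
        std::vector<double> intermediate_;   // formerly "lots of local variables"
        double accumulator_ = 0.0;

        std::vector<int> blahs() const { return {1, 2, 3}; }
        void stageOne(int blah) { intermediate_.push_back(blah * 2.0); }
        void stageTwo(int blah) { accumulator_ += intermediate_.back() + blah; }
        void saveResults() const { /* write output */ }
    };

    int main() { Run().execute(); }
    ```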

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, am new to NHibernate. When performing below test took 11.2 seconds (debug mode) i am seeing this large startup time in all my tests (basically creating the first session takes a tone of time) setup = Windows 2003 SP2 / Oracle10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1 using System; using System.Collections; using System.Data; using System.Globalization; using System.IO; using System.Text; using System.Data; using NUnit.Framework; using System.Collections.Generic; using System.Data.Common; using NHibernate; using log4net.Config; using System.Configuration; using FluentNHibernate; [Test()] public void GetEmailById() { Email result; using (EmailRepository repository = new EmailRepository()) { results = repository.GetById(1111); } Assert.IsTrue(results != null); } public class EmailRepository : RepositoryBase { public EmailRepository():base() { } } In my RepositoryBase public T GetById(object id) { using (var session = sessionFactory.OpenSession()) using (var transaction = session.BeginTransaction()) { try { T returnVal = session.Get(id); transaction.Commit(); return returnVal; } catch (HibernateException ex) { // Logging here transaction.Rollback(); return null; } } } The query time is very small. The resulting entity is really small. Subsequent queries are fine. Its seems to be getting the first session started. Has anyone else seen something similar?

    Read the article

  • Programming Environment for a Motorola 68000 in Linux

    - by Nick Presta
    Greetings all, I am taking a Structure and Application of Microcomputers course this semester and we're programming with the Motorola 68000 series CPU/board. The course syllabus suggests running something like Easy68K or Teesside Motorola 68000 Assembler/Emulator at home to test our programs. I told my prof I run x64 Linux and asked what sort of environment I would need to complete my coursework. He said that the easiest environment to use is a Windows XP 32bit VM with one of the two suggested applications installed, however, he doesn't really care what I use as long as I can test what I write at home. So I'm asking if there exists some sort of emulator or environment for Linux so I can test my code, and what sort of caveats I will run into by writing and testing my code in Linux. Also, I plan to do my editing in Vim, which probably isn't a problem, but I would like any insight into editors for 68000 assembly, if you have any. Thanks! EDIT: Just to clarify - I don't want to install Linux on the board at all - I want to program on my home machine, test the code locally, and then bring it onto the board for grading/running.

    Read the article

  • The XPath @root-node-position attribute info

    - by Igor Savinkin
    I couldn't find any info on the @root-node-position XPath attribute. Would you give me a link to where I can read about it? Is it XPath 2.0? The code (not mine) is ../preceding-sibling::div[1]/div[@root-node-position]/div applied to this HTML: <div class="left"> <div class='prod2'> <div class='name'>Dell Latitude D610-1.73 Laptop Wireless Computer </div>2 GHz Intel Pentium M, 1 GB DDR2 SDRAM, 40 GB </div> <div class='prod1'> <div class='name'>Samsung Chromebook (Wi-Fi, 11.6-Inch) </div>1.7 GHz, 2 GB DDR3 SDRAM, 16 GB </div> </div> <div class="right"> <div class='price2'>$239.95</div> <div class='price1 best'>$249.00</div> </div> First I fetch the price text under class='right' with this query: //DIV[contains(@class,'best')] and then I apply the above-mentioned XPath with @root-node-position under class='left' to retrieve the rest of the record info.

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4xAMD Opteron 6348, 2.8 Ghz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel): // Compile with: gcc scaling.c -std=c99 -fopenmp -O3 #include <stdio.h> #include <stdint.h> int main(){ const uint64_t umin=1; const uint64_t umax=10000000000LL; double sum=0.; #pragma omp parallel for reduction(+:sum) for(uint64_t u=umin; u<umax; u++) sum+=1./u/u; printf("%e\n", sum); } I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s for the code to run with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?
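
    One experiment that can separate OpenMP overhead from hardware limits is to rerun the same loop with the two floating-point divides removed. On Bulldozer-family Opterons such as the 6348, the two cores of a module share one floating-point unit, so a divide-heavy body can stop scaling well before 48 threads even when the parallelization itself is fine. A diagnostic sketch (something to try, not a definitive diagnosis):

    ```cpp
    // Compile with: g++ scaling_test.cpp -std=c++11 -fopenmp -O3
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::uint64_t umax = 10000000000ULL;
        double sum = 0.;
        // Same loop shape as the original, but only multiplies and adds in the body.
        // If this version scales close to linearly while the original does not,
        // the limit is the divide hardware, not the OpenMP reduction.
        #pragma omp parallel for reduction(+:sum)
        for (std::uint64_t u = 1; u < umax; u++)
            sum += static_cast<double>(u) * 1e-10;
        std::printf("%e\n", sum);
        return 0;
    }
    ```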

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something I find very hard to get into. His pseudo-code: int k =0; a[][] <- create mapModuleNearbyDotList -array //CPU O(n) for(j = 1 to n) // O(nlog(m)) for(i =1 to n) for(k = 1 to n) if(dot is nearby) adj[i][j]=min(adj[i][j], adj[i][k] + adj[k][j]); His idea: transformations of lists into tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(mlog(n)), where n is the number of vertices and m is the number of neighbouring vertices. My questions about his ideas: why waste resources transforming constantly modified lists into a table? Speed is the only point where I agree to some extent, but I cannot understand the same upper limit n on each of the for loops; perhaps he assumed the structure is circular. And why does the code take O(mlog(n)) time in the finite-structure case? The term "finite" may be the wrong one; maybe "explicit"?
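
    The triple loop is recognizably the Floyd-Warshall all-pairs shortest-path relaxation, which is where the O(n^3) bound comes from; note that for it to be correct, the loop over the intermediate vertex k must be the outermost one. A compact C++ sketch of the table-based version, for comparison with the pseudo-code above:

    ```cpp
    #include <algorithm>
    #include <vector>

    const double INF = 1e18;   // "no edge" marker

    // Floyd-Warshall: all-pairs shortest paths on an adjacency matrix.
    // dist[i][j] starts as the direct edge weight (INF if none, 0 on the diagonal).
    void floydWarshall(std::vector<std::vector<double> >& dist) {
        const std::size_t n = dist.size();
        for (std::size_t k = 0; k < n; ++k)          // intermediate vertex: outermost loop
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j]);
    }
    ```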

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and performance was unacceptable. The application does a lot with inserting and manipulating data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically "n" random Ints were stored in maps, and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to java, smaller is worse: Update Read Mutable 16.06% 76.51% Immutable 31.30% 20.68% I do not understand why the scala immutable map beats the scala mutable map in update performance. Using the sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly the immutable read performance is worse than the mutable read performance, more significantly so with larger maps. The update performance of the scala mutable map is surprisingly bad, relative to both scala immutable and plain Java. What is the explanation?

    Read the article

  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem. Microsoft Visual C++ 2005 compiler, 32bit windows xp sp3, amd 64 x2 cpu. Code: double a = 3015.0; double b = 0.00025298219406977296; //*((unsigned __int64*)(&a)) == 0x40a78e0000000000 //*((unsigned __int64*)(&b)) == 0x3f30945640000000 double f = a/b;//3015/0.00025298219406977296; the result of calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000) although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. fractional part is lost. Unfortunately, I need fractional part to be correct. Questions: 1) Why does this happen? 2) How can I fix the problem? Additional info: 0) The result is taken directly from "watch" window (it wasn't printed, and I didn't forget to set printing precision). I also provided hex dump of floating point variable, so I'm absolutely sure about calculation result. 1) The disassembly of f = a/b is: fld qword ptr [a] fdiv qword ptr [b] fstp qword ptr [f] 2) f = 3015/0.00025298219406977296; yields correct result (f == 11917834.814763514 , *((unsigned __int64*)(&f)) == 0x4166bb415a128aef ), but it looks like in this case result is simply calculated during compile-time: fld qword ptr [__real@4166bb415a128aef (828EA0h)] fstp qword ptr [f] So, how can I fix this problem? P.S. I've found a temporary workaround (i need only fractional part of division, so I simply use f = fmod(a/b)/b at the moment), but I still would like to know how to fix this problem properly - double precision is supposed to be 16 decimal digits, so such calculation isn't supposed to cause problems.

    Read the article

  • Deploy to web container, bundle web container or embed web container...

    - by Jason
    I am developing an application that needs to be as simple as possible for the end user to install. While the end users will likely be experienced Linux users (or sales engineers), they don't really know anything about Tomcat, Jetty, etc., nor do I think they should. So I see 3 ways to deploy our applications. I should also state that this is the first app I have had to deploy that has a web interface, so I haven't really faced this question before. First is to deploy the application into an existing web container. Since we only deploy to SUSE or Red Hat this seems easy enough to do; however, we're not big on the idea of multiple apps running in one web container, since it makes it harder to take down just one app. The next option is to bundle Tomcat or Jetty and have the startup/shutdown scripts launch our bundled web container. Or, third, embed the container. This will probably provide the same user experience as the second option. I'm curious what others do when faced with this problem to make it as foolproof as possible for the end user. I've almost ruled out deploying into an existing web container, as we often like to set per-application resource limits and CPU affinity, which I believe would affect all apps deployed into a web container/app server and not just a specific application. Thank you.

    Read the article

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running sql queries on a mysql db table that has 110Mn+ unique records for whole day. Problem: Whenever I run any query with "where" clause it takes at least 30-40 mins. Since I want to generate most of data on the next day, I need access to whole db table. Could you please guide me to optimize / restructure the deployment model? Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0 4 GB RAM, Dual Core dual CPU 3GHz RHEL 3 my.cnf contents : [root@reports root]# cat /etc/my.cnf [mysqld] datadir=/data/mysql/data/ socket=/tmp/mysql.sock sort_buffer_size = 2000000 table_cache = 1024 key_buffer = 128M myisam_sort_buffer_size = 64M # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 [mysql.server] user=mysql basedir=/data/mysql/data/ [mysqld_safe] err-log=/data/mysql/data/mysqld.log pid-file=/data/mysql/data/mysqld.pid [root@reports root]# DB table details: CREATE TABLE `RAW_LOG_20100504` ( `DT` date default NULL, `GATEWAY` varchar(15) default NULL, `USER` bigint(12) default NULL, `CACHE` varchar(12) default NULL, `TIMESTAMP` varchar(30) default NULL, `URL` varchar(60) default NULL, `VERSION` varchar(6) default NULL, `PROTOCOL` varchar(6) default NULL, `WEB_STATUS` int(5) default NULL, `BYTES_RETURNED` int(10) default NULL, `RTT` int(5) default NULL, `UA` varchar(100) default NULL, `REQ_SIZE` int(6) default NULL, `CONTENT_TYPE` varchar(50) default NULL, `CUST_TYPE` int(1) default NULL, `DEL_STATUS_DEVICE` int(1) default NULL, `IP` varchar(16) default NULL, `CP_FLAG` int(1) default NULL, `USER_LOCATE` bigint(15) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000; Thanks in advance! Regards,

    Read the article

  • Unmanaged Process in Mono

    - by Residuum
    I want to start a quite expensive process (jackd) from a Mono application, and do not need full access to the process from the application itself. As the process is so expensive in terms of CPU usage, a Glib.IdleHandler for polling the process will not work, as it is never executed, and the GUI becomes unresponsive. Is there any way to have the cake and eating it at the same time in Mono? EDIT: I only need to be able to start and stop the process from Mono, I do not need information about the state of the process or if it has exited, as my application will register itself as a client to jackd, basically I need a "replacement" for bash's jackd &>/dev/null 2>&1 & for the System.Diagnostics.Process ;). Here is what I have so far for starting and stopping the process: public void StartJackd() { _jackd = new Process (); _jackd.StartInfo = _jackdStartup; if (_jackd.Start ()) { _jackd.EnableRaisingEvents = true; _jackd.Exited += JackdExited; } } public void StopJackd() { if (_jackd != null && !_jackd.HasExited) { _jackd.CloseMainWindow (); } } And somewhere else I have this code for registering the IdleHandler: GLib.Idle.Add(new GLib.IdleHandler(UpdateJackdConnections)); This handler will fire all the time, while the process is not running, but never, when jackd is running.

    Read the article

  • Different programming languages possibilities

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled into machine code or managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here) I decided to learn C and C++. I chose C because, from what I read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact, C has its special headers and functions for Win32 API interaction; is this assumed to be an important part of a programming language? Or are there languages that have no support for calling the Win32 API and just use the console for IO plus some functions for basic file IO? Because for Windows programming with graphical output it is essential to have access to the Win32 API. I know this question might seem silly, but please help me; I ask for study purposes. Thanks.
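
    In practice almost any language that can call into a C-style DLL can reach the Win32 API: the C headers are just declarations for functions exported by DLLs such as user32.dll and kernel32.dll, and other languages get there through their foreign-function interfaces. A minimal C++ example of one such call (with MinGW, something like g++ msg.cpp -luser32 should build it, though user32 is often linked by default):

    ```cpp
    #include <windows.h>

    int main() {
        // MessageBoxW lives in user32.dll; the header only declares it.
        MessageBoxW(nullptr, L"Hello from the Win32 API", L"Demo", MB_OK);
        return 0;
    }
    ```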

    Read the article

  • Concept: Is mongo right for applying schemas?

    - by Jan
    I am currently in charge of checking whether it is worthwhile for one of our upcoming products to be developed on Mongo. Without going into too much detail, I'll try to explain what the app does. The app simply has "entities". These entities are technical things like cell phones, TVs, laptops, tablet PCs, and so forth. Of course, a cell phone has different attributes than a tablet PC, and a laptop has yet other attributes, like RAM, CPU, display size and so on. Now I want something that we want to call a schema: we define that for tablet PCs we need to save the display size, amount of RAM, size of the flash device, processor type, processor speed and so on. For cell phones we might save display size, GSM, EDGE, 3G, 4G, processor, RAM, touch screen technology, bla bla bla. I think you got it :) What I want to realize is that each "category" has a schema, and when one of the system's users enters a new product (let's say the new iPhone 4), the app constructs the form to be filled out with the appropriate attributes. So far it sounds nice and should not be a problem with Mongo. But now the tough part, for which I could not find a clean solution. An attribute modeled in Mongo looks like: { _id: 1234456, name: "Attribute name", type: 0, "description" } But what do I do if I need this attribute in several languages, like: { en: {name: "Attribute name", type: 0, "description"}, de: {name: "Name des Attributs, type: 0, "Beschreibung"} } I also need to ensure that the German attribute gets updated as soon as the English one is updated, for instance when type changes from 0 to 1. Any ideas on that?

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want check the Redis info on my pc with node, so I use node_redis and run the info function: var redis = require("redis"), client = redis.createClient(); client.on("connect", function () { client.info(function (err, replay) { console.log(replay); }) }) but the response is un-format: `#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it to an object? like: { redis_version:2.6.16, redis_git_sha1:00000000, redis_git_dirty:0, ...... } so that I can read each property's value, get information I need
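
    The INFO reply is just "key:value" lines separated by \r\n, plus "# Section" headers and blank lines, so turning it into an object is a matter of splitting on line breaks and then on the first colon. The same parsing sketched here in C++ (a JavaScript version would use split('\r\n') and indexOf(':') in the same way):

    ```cpp
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Parse Redis INFO output into a flat key/value map, skipping section headers.
    std::map<std::string, std::string> parseInfo(const std::string& raw) {
        std::map<std::string, std::string> result;
        std::istringstream lines(raw);
        std::string line;
        while (std::getline(lines, line)) {
            if (!line.empty() && line.back() == '\r') line.pop_back();  // strip trailing \r
            if (line.empty() || line[0] == '#') continue;               // skip "# Section" and blanks
            const std::size_t colon = line.find(':');
            if (colon == std::string::npos) continue;
            result[line.substr(0, colon)] = line.substr(colon + 1);
        }
        return result;
    }

    int main() {
        const std::string raw = "# Server\r\nredis_version:2.6.16\r\nuptime_in_seconds:1777\r\n";
        for (const auto& kv : parseInfo(raw))
            std::cout << kv.first << " = " << kv.second << "\n";   // redis_version = 2.6.16, ...
    }
    ```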

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    Am using MySQL 5.5.8 on an Ubuntu system and every X amount of time it creates a huge lag that lasts a couple of seconds. Then all goes back to normal until the next lag. The time period varies but it looks like it happen periodically. Am using InnoDB. It is like hiccups in the MySQL. What could be creating this sort of periodic problem. Do not have any cron jobs or process running every time the X period happens. The X period could be between 30 minutes to 2 hours. So for example it could happen every 30 minutes for the next 12 hours or it could happen every 2 hours for the next 8 hours. key_buffer_size = 256M max_allowed_packet = 1M table_cache = 1024 table_open_cache = 1024 sort_buffer_size = 2M read_buffer_size = 2M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 32M thread_cache_size = 128 query_cache_size= 128M log-slow-queries = slow.log long_query_time = 5 log-queries-not-using-indexes # Try number of CPU's*2 for thread_concurrency thread_concurrency = 4 max_connections=512 #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /usr/local/mysql/data # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high innodb_buffer_pool_size = 1G #innodb_additional_mem_pool_size = 20M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 64M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 0 #innodb_lock_wait_timeout = 50 [mysqldump] quick max_allowed_packet = 16M [myisamchk] key_buffer_size = 64M sort_buffer_size = 64M read_buffer = 2M write_buffer = 2M There are about 200+ tables divided in 3 databases. The most written too is in InnoDB. The other ones are more read. Several of the tables in the InnoDB have more than 2 million records. The other databases top at about 400 thousand records and do not change so often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32Bit Ubuntu.

    Read the article
