Search Results

Search found 6612 results on 265 pages for 'seconds'.


  • Why is onCreate() called multiple times when I use Thread()?

    - by RajaReddy PolamReddy
    In my app I have run into a problem with threads. I am using native code: I load a library and then call native functions from the Android code.

    1. Using a Thread:

        PjsuaThread pjsuaThread = new PjsuaThread();
        pjsuaThread.start();

    Thread code:

        class PjsuaThread extends Thread {
            public void run() {
                if (pjsua_app.initApp() != 0) {  // native function call
                    return;
                } else {
                }
                pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME);  // native function call
                finished = true;
            }
        }

    When I use code like this, onCreate() is called multiple times. The library loads and some functions are called properly, but after a few seconds onCreate() is called again, and because of that the app crashes.

    2. Using AsyncTask: I also tried an AsyncTask for this requirement, but it crashes the application (inside the library code) and I am not able to call any functions:

        class SipTask extends AsyncTask<Void, String, Void> {
            protected Void doInBackground(Void... args) {
                if (pjsua_app.initApp() != 0) {
                    return null;
                } else {
                }
                pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME);
                finished = true;
                return null;
            }

            @Override
            protected void onPostExecute(Void result) {
                super.onPostExecute(result);
                Log.i(TAG, "On POst ");
            }
        }

    What is annoying is that in most cases the library itself is not missing; it loads, and the crash happens somewhere in between. Does anyone know the reason?

  • What are the drawbacks of this Classing format?

    - by Keysle
    This is a 3-layer example of my classing format:

        function __(_){ return _.constructor }

        //class
        var _ = ( CLASS = function(){
            this.variable = 0;
            this.sub = new CLASS.SUBCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
            this.sub.func();
        }
        _.divePeak = function(){
            alert('lvl'+this.variable);
            this.sub.variable += 5;
        }

        //sub class
        _ = ( __(_).SUBCLASS = function(){
            this.variable = 1;
            this.sub = new CLASS.SUBCLASS.DEEPCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
            this.sub.func();
        }

        //deep class
        _ = ( __(_).DEEPCLASS = function(){
            this.variable = 2;
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
        }

    Before you blow a gasket, let me explain myself. The purpose behind the underscores is to cut down the time needed to define functions for a class and to define subclasses of a class. To me it's easier to read. I KNOW this interferes with underscore.js if you intend to use it in your classes, though I'm sure _.js could easily be switched over to another $ymbol... but I digress. Why have classes within a class? Because solar.system() and social.system() mean two totally different things, but it's convenient to use the same name. Why use underscores to manage the definition of the class? Because "Solar.System.prototype" took me about 2 seconds to type out and 2 typos to correct. It also keeps all function names for all classes in the same column of text, which is nice for legibility. All I'm doing is presenting my reasoning behind this method and why I came up with it. I'm 3 days into learning OO JS and I am very willing to accept that I might have messed up.

  • Job queueing and execution mechanism

    - by Calm Storm
    In my web service, all method calls submit jobs to a queue. These operations take a long time to execute, so each one submits a job to a queue and returns a status saying "Submitted". The client then keeps polling another service method to check the status of the job. Presently, what I do is create my own Queue and Job classes that are Serializable and persist these jobs (i.e., their serialized byte-stream form) into the database. So an UpdateLogistics operation just queues up an "UpdateLogisticsJob" and returns. I have written my own JobExecutor which wakes up every N seconds, scans the database table for any existing jobs, and executes them. Note that the jobs have to be persisted because they must survive app-server crashes. This was done a long time ago, using bespoke classes for my Queues, Jobs, Executors, etc. But now I would like to know whether someone has done something similar before. In particular: are there frameworks available for this, something in Spring/Apache, etc.? Any framework that is easy to adapt and debug and that plays well with libraries like Spring would be great.

  • C++ game loop example

    - by David
    Can someone write up a source for a program that just has a "game loop" that keeps looping until you press Esc, while the program shows a basic image? Here's the source I have right now, but I have to use SDL_Delay(2000); to keep the program alive for 2 seconds, during which the program is frozen.

        #include "SDL.h"

        int main(int argc, char* args[]) {
            SDL_Surface* hello = NULL;
            SDL_Surface* screen = NULL;
            SDL_Init(SDL_INIT_EVERYTHING);
            screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
            hello = SDL_LoadBMP("hello.bmp");
            SDL_BlitSurface(hello, NULL, screen, NULL);
            SDL_Flip(screen);
            SDL_Delay(2000);
            SDL_FreeSurface(hello);
            SDL_Quit();
            return 0;
        }

    I just want the program to stay open until I press Esc. I know how the loop works; I just don't know whether to implement it inside the main() function or outside of it. I've tried both, and both times it failed. If you could help me out that would be great :P

  • boost::asio::io_service throws exception

    - by Ace
    Okay, I seriously cannot figure this out. I have a DLL project in MSVC that is attempting to use Asio (from Boost 1.45.0), but whenever I create my io_service, an exception is thrown. Here is what I am doing for testing purposes:

        void run() {
            boost::this_thread::sleep(boost::posix_time::seconds(5));
            try {
                boost::asio::io_service io_service;
            } catch (std::exception & e) {
                MessageBox(NULL, e.what(), "Exception", MB_OK);
            }
        }

        BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) {
            if (fdwReason == DLL_PROCESS_ATTACH) {
                boost::thread thread(run);
            }
            return TRUE;
        }

    This is what the message box shows:

        winsock: WSAStartup cannot function at this time because the underlying system
        it uses to provide network services is currently unavailable

    Here is what MSDN says about it (error code 10091, WSASYSNOTREADY): network subsystem is unavailable. This error is returned by WSAStartup if the Windows Sockets implementation cannot function at this time because the underlying system it uses to provide network services is currently unavailable. Users should check: that the appropriate Windows Sockets DLL file is in the current path; that they are not trying to use more than one Windows Sockets implementation simultaneously (if there is more than one Winsock DLL on your system, be sure the first one in the path is appropriate for the network subsystem currently loaded); and the Windows Sockets implementation documentation, to be sure all necessary components are currently installed and configured correctly. Yet none of this seems to apply to me (or so I think). Here is my command line:

        /O2 /GL /D "_WIN32_WINNT=0x0501" /D "_WINDLL" /FD /EHsc /MD /Gy /Fo"Release\" /Fd"Release\vc90.pdb" /W3 /WX /nologo /c /TP /errorReport:prompt

    If anyone knows what might be wrong, please help me out! Thanks.

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have or need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types, but it's very slow: around 70 seconds for the query below. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN ANALYZE output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow, or how to make it faster?

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. It seemed ideal at the time, since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task, which I'd hoped would free up the controller to take more requests:

        public class ReportsController : AsyncController
        {
            public void LongRunningActionAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                var newThread = new Thread(LongTask);
                newThread.Start();
            }

            private void LongTask()
            {
                // Do something that takes a really long time
                //.......
                AsyncManager.OutstandingOperations.Decrement();
            }

            public ActionResult LongRunningActionCompleted(string message)
            {
                // Set some data up on the view or something...
                return View();
            }

            public JsonResult AnotherControllerAction()
            {
                // Do a quick task...
                return Json("...");
            }
        }

    But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, I call LongRunningAction, which takes 10 seconds, and then call AnotherControllerAction, which takes less than a second. AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true":

        $.ajax({
            async: true,
            type: "POST",
            url: "/Reports.aspx/LongRunningAction",
            dataType: "html",
            success: function(data, textStatus, XMLHttpRequest) {
                // ...
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // ...
            }
        });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!

  • how to make it easy for users to register at my site?

    - by rob
    I want to make it dirt simple for users coming to my site to register so they can post comments, vote on things, etc. I would like them to be able to use their Facebook ID, Twitter ID, Yahoo Mail ID, Gmail ID, AIM ID, MSN ID, or whatever else people are likely to have (not necessarily all of those, but the more the better). I want my mom to be able to do it in 30 seconds or less (that is, no "enter your OpenID URL here" type thing that would confuse her). I'd prefer they not have to pick a unique name, as that gets annoying as the site gains users and it gets hard to find one that is unique. What is the best option here? I'm not quite sure about OpenID vs. OAuth, or whether there are other options. And I'd like it to be as simple for me, the developer, as possible (of course!). I don't want to spend forever learning some protocol, nor have to structure my whole app around this. It would be great if there were a site with sample code that is pretty easy to drop in. BTW, Stack Overflow is a good example of a site that was easy for me to register for.

  • Long primitive or AtomicLong for a counter?

    - by Rich
    Hi, I have a need for a counter of type long with the following requirements/facts:

    - Incrementing the counter should take as little time as possible.
    - The counter will only be written to by one thread.
    - Reading from the counter will be done in another thread.
    - The counter will be incremented regularly (as much as a few thousand times per second), but will only be read once every five seconds.
    - Precise accuracy isn't essential; only a rough idea of the size of the counter is good enough.
    - The counter is never cleared or decremented.

    Based upon these requirements, how would you choose to implement your counter: as a simple long, as a volatile long, or using an AtomicLong? Why? At the moment I have a volatile long, but I was wondering whether another approach would be better. I am also incrementing my long by doing ++counter as opposed to counter++. Is this really any more efficient (as I have been led to believe elsewhere) because there is no assignment being done? Thanks in advance, Rich

  • SQL queries slower than expected

    - by neubert
    Before I show the query, here are the relevant table definitions:

        CREATE TABLE phpbb_posts (
            topic_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
            poster_id mediumint(8) UNSIGNED DEFAULT '0' NOT NULL,
            KEY topic_id (topic_id),
            KEY poster_id (poster_id)
        );

        CREATE TABLE phpbb_topics (
            topic_id mediumint(8) UNSIGNED NOT NULL auto_increment
        );

    Here's the query I'm trying to do:

        SELECT p.topic_id, p.poster_id
        FROM phpbb_topics AS t
        LEFT JOIN phpbb_posts AS p
            ON p.topic_id = t.topic_id AND p.poster_id <> ...
        WHERE p.poster_id IS NULL;

    Basically, the query is an attempt to find all topics where the number of times someone other than the target user has posted is zero; in other words, the topics where the only person who has posted is the target user. The problem is that the query is taking a super long time. My general assumption when it comes to SQL is that JOINs of any kind are super fast and can be done in no time at all, assuming all relevant columns are primary or foreign keys (which in this case they are). I tried out a few other queries:

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p ON p.topic_id = t.topic_id;

    That returns 353340 pretty quickly. I then do these:

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p ON p.topic_id = t.topic_id AND p.poster_id <> 77198;

        SELECT COUNT(1)
        FROM phpbb_topics AS t
        JOIN phpbb_posts AS p ON p.topic_id = t.topic_id
        WHERE p.poster_id <> 77198;

    And both of those take quite a while (between 15 and 30 seconds). If I change the <> to an =, it takes no time at all. Am I making some incorrect assumptions? Maybe my DB is just foobar'd?

  • Django Admin Running Same Query Thousands of Times for Model

    - by Tom
    Running into an odd... loop when trying to view a model in the Django admin. I have three related models (code trimmed for brevity; hopefully I didn't trim something I shouldn't have):

        class Association(models.Model):
            somecompany_entity_id = models.CharField(max_length=10, db_index=True)
            name = models.CharField(max_length=200)

            def __unicode__(self):
                return self.name

        class ResidentialUnit(models.Model):
            building = models.CharField(max_length=10)
            app_number = models.CharField(max_length=10)
            unit_number = models.CharField(max_length=10)
            unit_description = models.CharField(max_length=100, blank=True)
            association = models.ForeignKey(Association)
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

            def __unicode__(self):
                return '%s: %s, Unit %s' % (self.association, self.building, self.unit_number)

        class Resident(models.Model):
            unit = models.ForeignKey(ResidentialUnit)
            type = models.CharField(max_length=20, blank=True, default='')
            lookup_key = models.CharField(max_length=200)
            jenark_id = models.CharField(max_length=20, blank=True)
            user = models.ForeignKey(User)
            is_association_admin = models.BooleanField(default=False, db_index=True)
            show_in_contact_list = models.BooleanField(default=False, db_index=True)
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)
            _phones = {}
            home_phone = None
            work_phone = None
            cell_phone = None
            app_number = None
            account_cache_key = None

            def __unicode__(self):
                return '%s' % self.user.get_full_name()

    It's the last model that's causing the problem. Trying to look at a Resident in the admin takes 10-20 seconds. If I take 'self.association' out of the __unicode__ method of ResidentialUnit, a Resident page renders pretty quickly. Looking at it in the debug toolbar: without the association name in ResidentialUnit (which is a foreign key on Resident), the page runs 14 queries. With the association name put back in, it runs a far more impressive 4,872 queries. The strangest part is that the extra queries all seem to be looking up the association name. They all come from the same line, the __unicode__ method of ResidentialUnit. Each one is the exact same thing, e.g.,

        SELECT `residents_association`.`id`, `residents_association`.`jenark_entity_id`, `residents_association`.`name`
        FROM `residents_association`
        WHERE `residents_association`.`id` = 1096
        ORDER BY `residents_association`.`name` ASC

    I assume I've managed to create a circular reference, but if it were truly circular it would just die, not run 4000x and then return. I'm having trouble finding a good Google or Stack Overflow result for this.
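    A frequent culprit for exactly this pattern (a guess, not something stated in the post) is the admin change form itself: the dropdown for Resident.unit renders every ResidentialUnit through __unicode__, and each call follows the association foreign key with its own SELECT. A minimal sketch of the usual workarounds, with hypothetical app and admin names:

        # sketch only: assumes a Django version with ModelAdmin.formfield_for_foreignkey
        # and QuerySet.select_related (both long-standing APIs)
        from django.contrib import admin
        from myapp.models import Resident, ResidentialUnit   # app path hypothetical

        class ResidentAdmin(admin.ModelAdmin):
            # raw_id_fields = ('unit',)   # option 1: skip the giant dropdown entirely

            # option 2: keep the dropdown, but pull the association in the same query
            # so ResidentialUnit.__unicode__ stops issuing one lookup per choice
            def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
                if db_field.name == 'unit':
                    kwargs['queryset'] = ResidentialUnit.objects.select_related('association')
                return super(ResidentAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs)

        admin.site.register(Resident, ResidentAdmin)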

  • Performance Problems with Django's F() Object

    - by JayhawksFan93
    Has anyone else noticed performance issues using Django's F() object? I am running Windows XP SP3 and developing against the Django trunk. A snippet of the models I'm using and the query I'm building are below. When I have the F() object in place, each call to a QuerySet method (e.g. filter, exclude, order_by, distinct, etc.) takes approximately 2 seconds, but when I comment out the F() clause the calls are sub-second. I had a co-worker test it on his Ubuntu machine, and he is not experiencing the same performance issues I am with the F() clause. Is anyone else seeing this behavior?

        class Move(models.Model):
            state_meaning = models.CharField(max_length=16, db_index=True, blank=True, default='')
            drop = models.ForeignKey(Org, db_index=True, null=False, default=1, related_name='as_move_drop')

        class Split(models.Model):
            state_meaning = models.CharField(max_length=16, db_index=True, blank=True, default='')
            move = models.ForeignKey(Move, related_name='splits')
            pickup = models.ForeignKey(Org, db_index=True, null=False, default=1, related_name='as_split_pickup')
            pickup_date = models.DateField(null=True, default=None)
            drop = models.ForeignKey(Org, db_index=True, null=False, default=1, related_name='as_split_drop')
            drop_date = models.DateField(null=True, default=None, db_index=True)

        def get_splits(begin_date, end_date):
            qs = Split.objects \
                .filter(state_meaning__in=['INPROGRESS', 'FULFILLED'],
                        drop=F('move__drop'),   # <<< the line in question
                        pickup_date__lte=end_date)
            elapsed = timer.clock() - start
            print 'qs1 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.filter(Q(drop_date__gte=begin_date) | Q(drop_date__isnull=True))
            elapsed = timer.clock() - start
            print 'qs2 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.exclude(move__state_meaning='UNFULFILLED')
            elapsed = timer.clock() - start
            print 'qs3 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.order_by('pickup_date', 'drop_date')
            elapsed = timer.clock() - start
            print 'qs7 took %.3f' % elapsed

            start = timer.clock()
            qs = qs.distinct()
            elapsed = timer.clock() - start
            print 'qs8 took %.3f' % elapsed
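    The timings above measure only QuerySet construction, since nothing is evaluated until iteration. A diagnostic sketch (not from the original post, same models assumed) that separates building, SQL compilation, and execution can show whether the 2 seconds are spent in Python before the database is ever hit:

        # diagnostic sketch only; Split is the model from the question
        import time
        from django.db.models import F, Q

        def profile_f_clause(begin_date, end_date):
            t0 = time.clock()                  # on Windows, time.clock() is a high-resolution timer
            qs = Split.objects.filter(
                state_meaning__in=['INPROGRESS', 'FULFILLED'],
                drop=F('move__drop'),
                pickup_date__lte=end_date,
            )
            t1 = time.clock()                  # time spent just building the QuerySet

            sql = str(qs.query)                # forces SQL compilation, still no DB round trip
            t2 = time.clock()

            rows = list(qs)                    # evaluates the QuerySet against the database
            t3 = time.clock()

            print 'build:   %.3fs' % (t1 - t0)
            print 'compile: %.3fs' % (t2 - t1)
            print 'execute: %.3fs (%d rows)' % (t3 - t2, len(rows))
            print sql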

  • converting a UTC time to a local time zone in Java

    - by aloo
    I know this subject has been beaten to death, but after searching this problem for a few hours I still have to ask. My problem: do calculations on dates on a server based on the current time zone of a client app (iPhone). The client app tells the server, in seconds, how far away its time zone is from GMT. I would like to use this information to do computations on dates on the server. The dates on the server are all stored as UTC time. So I would like to get the HOUR of a UTC Date object after it has been converted to this local time zone. My current attempt:

        int hours = (int) Math.floor(secondsFromGMT / (60.0 * 60.0));
        int mins = (int) Math.floor((secondsFromGMT - (hours * 60.0 * 60.0)) / 60.0);
        String sign = hours > 0 ? "+" : "-";
        Calendar now = Calendar.getInstance();
        TimeZone t = TimeZone.getTimeZone("GMT" + sign + hours + ":" + mins);
        now.setTimeZone(t);
        now.setTime(someDateTimeObject);
        int hourOfDay = now.get(Calendar.HOUR_OF_DAY);

    The variables hours and mins represent how far the local time zone is from GMT, in hours and minutes. After debugging this code, the variables hours, mins and sign are correct. The problem is that hourOfDay does not return the correct hour: it is returning the hour as of UTC time, not local time. Ideas?

  • Client side page call/scrape?

    - by Silvre
    Here is the problem: I have a web application (a frequently changing notification system) that runs on a series of local computers. The application refreshes every couple of seconds to display the new information. The computers only display info and do not have keyboards or ANY input device. The issue is that if the connection to the server is lost (say updates are installed and a server must be rebooted), a page-not-found error is displayed. We must then either reboot all computers that are running this app, OR add a keyboard and refresh the browser, OR try to access each computer remotely and refresh the browser. None of these are good options, and they result in a lot of frustration. I cannot change the actual application OR the server environment. So what I need is some way to test the call to the application and, if an error is returned or it times out, keep trying every minute or so until the connection is reestablished. My idea is to create a client-side page scraper that makes a JS request to the application (which displays basic HTML) and can run locally on the machine, no server required. If the scrape returns the correct content, it displays it. If not, it continues to request the page until the actual page content is returned. Is this possible? What is the best way to do it?

  • Running a process at the Windows 7 Welcome Screen

    - by peelman
    So here's the scoop: I wrote a tiny C# app a while back that displays the hostname, IP address, imaged date, thaw status (we use DeepFreeze), current domain, and the current date/time on the welcome screen of our Windows 7 lab machines. This replaced our previous information block, which was set statically at startup and actually embedded text into the background, with something a little more dynamic and functional. The app uses a Timer to update the IP address, DeepFreeze status, and clock every second, and it checks whether a user has logged in and kills itself when it detects that condition. If we just run it via our startup script (set via group policy), it holds the script open and the machine never makes it to the login prompt. If we use something like the start or cmd commands to start it off under a separate shell/process, it runs until the startup script finishes, at which point Windows seems to clean up any and all child processes of the script. We're currently able to bypass that using psexec -s -d -i -x to fire it off, which lets it persist after the startup script is completed, but this can be incredibly slow, adding anywhere between 5 seconds and over a minute to our startup time. We have experimented with using another C# app to start the process, via the Process class, using WMI calls (Win32_Process and Win32_ProcessStartup) with various startup flags, etc., but all end with the same result: the script finishes and the info-block process gets killed. I tinkered with rewriting the app as a service, but services were never designed to interact with the desktop, let alone the login window, and getting things operating in the right context never really seemed to work out. So, for the question: does anybody have a good way to accomplish this, i.e. launch a task so that it is independent of the startup script and runs on top of the welcome screen?

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (9 seconds after the first move), and I'm told it should take only a few milliseconds, so I'm trying to optimize it. I ran it through CodeAnalyst and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board):

        std::_Iterator_base::_Orphan_me
        std::bitset<9>::test
        std::_Iterator_base::_Adopt
        std::bitset<9>::reference::operator bool
        std::_Iterator_base::~_Iterator_base

        void BuildTreeToDepth(Node &nNode, const int& nextPlayer, int depth)
        {
            if (depth > 0)
            {
                //Calculate gameboard states
                int evalBoard = nNode.m_board.CalculateBoardState();
                bool isFinished = nNode.m_board.isFinished();

                if (isFinished || (nNode.m_board.isWinner() > 0))
                {
                    nNode.m_winCount = evalBoard;
                }
                else
                {
                    Ticboard tBoard = nNode.m_board;
                    do
                    {
                        int validMove = tBoard.FirstValidMove();
                        if (validMove != -1)
                        {
                            Node f;
                            Ticboard tempBoard = nNode.m_board;
                            tempBoard.Move(validMove, nextPlayer);
                            tBoard.Move(validMove, nextPlayer);
                            f.m_board = tempBoard;
                            f.m_winCount = 0;
                            f.m_Move = validMove;

                            int currPlay = (nextPlayer == 1 ? 2 : 1);
                            BuildTreeToDepth(f, currPlay, depth - 1);

                            nNode.m_winCount += f.m_board.CalculateBoardState();
                            nNode.m_branches.push_back(f);
                        }
                        else
                        {
                            break;
                        }
                    } while (true);
                }
            }
        }

    Where should I be looking to optimize it? And how should I optimize these 5 calls (I don't recognize them)?

  • How to implement a multi-threaded asynchronous operation?

    - by drowneath
    Here's how my current approach looks:

        // Somewhere in a UI class
        // Called when a button called "Start" is clicked
        MyWindow::OnStartClicked(Event &sender)
        {
            _thread = new boost::thread(boost::bind(&MyWindow::WorkToDo, this));
        }

        MyWindow::WorkToDo()
        {
            for (int i = 1; i < 10000000; i++)
            {
                int percentage = (int)((float)i / 100000000.f);
                _progressBar->SetValue(percentage);
                _statusText->SetText("Working... %d%%", percentage);
                printf("Pretend to do something useful...\n");
            }
        }

        // Called on every frame
        MyWindow::OnUpdate()
        {
            if (_thread != 0 && _thread->timed_join(boost::posix_time::seconds(0)))
            {
                _progressBar->SetValue(100);
                _statusText->SetText("Completed!");
                delete _thread;
                _thread = 0;
            }
        }

    But I'm afraid this is far from safe, since I keep getting an unhandled exception at the end of the program execution. I basically want to separate a heavy task into another thread without blocking the GUI part.

  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

        Table authors
            id        INT
            nickname  VARCHAR(50)
            status    ENUM('active', 'blocked')
            about     TEXT

        Table books
            author_id INT
            title     VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT(b.id) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query can use it. Here is how the current EXPLAIN looks:

        id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
        1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
        1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'll appreciate any help and hints on this: what should the index structure be, so that MySQL can use it?

  • Is there any better way to capture the screen than PIL.ImageGrab.grab()?

    - by user1474837
    I am making a screen-capture program with Python. My current problem is that PIL.ImageGrab.grab() gives me the same output as it did 2 seconds earlier. For instance (I think I am not being clear): in the following program, almost all the images are the same, with the same Image.tostring() value, even though I was moving my screen while the PIL.ImageGrab.grab loop was executing.

        >>> from PIL.ImageGrab import grab
        >>> l = []
        >>> import time
        >>> for a in l:
                l.append(grab())
                time.sleep(0.01)

        >>> for a in range(0, 30):
                l.append(grab())
                time.sleep(0.01)

        >>> b = []
        >>> for a in l:
                b.append(a.tostring())

        >>> len(b)
        30
        >>> del l
        >>> last = []
        >>> a = 0
        >>> a = -1
        >>> last = ""
        >>> same = -1
        >>> for pic in b:
                if b == last:
                    same = same + 1
                last = b

        >>> same
        28
        >>>

    This is a problem, as all the images but one are the same; only 1 out of 30 is different. That would make for absolutely horrible video quality. Please tell me if there is any better-quality alternative to PIL.ImageGrab.grab(). I need to capture the whole screen. Thanks!
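    Before blaming ImageGrab, note that the checking loop above compares the whole list (b == last) instead of the current frame (pic == last); after the first pass last is bound to b itself, so every later comparison is trivially true and same ends at 28 regardless of what was captured. A corrected measurement, sketched with the same PIL calls:

        # sketch: re-run the duplicate check against the actual frame data
        from PIL.ImageGrab import grab
        import time

        frames = []
        for _ in range(30):
            frames.append(grab().tostring())   # raw pixel data for each capture
            time.sleep(0.01)

        duplicates = 0
        last = None
        for pic in frames:
            if pic == last:                    # compare this frame to the previous one
                duplicates += 1
            last = pic

        print '%d of %d frames identical to their predecessor' % (duplicates, len(frames))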

  • Polling a web URL until event

    - by Jaxo
    I'm really sorry about the crappy title; if anybody has a better way of wording it, please edit it! I basically need to have a C# application run a function if the output of a URL is a certain value. For example, if the website says blue, the background colour will be blue; red to make it red, etc. The problem is I don't want to spam my web server with checks. The 4 bytes it downloads each time are negligible, but if I were to deploy this type of system on multiple computers, it would get slower and slower and the bandwidth would add up quickly. So my question is: how can my desktop application run a piece of code only when a web URL has different output, without checking each time? I can't use sockets, and any sort of LAN protocol won't end up working. My reasoning behind this potentially nefarious code is to be able to mute computers by updating a file on the website (as you may have seen in my previous question today, sorry!). I'd like it to be rather quick, with the refresh time not minutes apart; a few seconds at most would be ideal. How can I accomplish this? The website's code is easy, but getting the C# application to check when it changes is the part I'm stuck on. Nothing shows up on the website other than the command. Thanks!
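    The usual low-cost pattern here is a conditional GET: the client remembers the ETag or Last-Modified value from the previous response, and the server replies 304 Not Modified when nothing changed, so unchanged polls cost only headers. Sketched below in Python with the requests library purely to illustrate the protocol (the app in the question is C#, and the URL and interval are made up):

        # illustration of conditional GET polling; URL and interval are hypothetical
        import time
        import requests

        URL = "http://example.com/mute-flag.txt"

        def poll_forever(on_change, interval=5):
            etag = None
            while True:
                headers = {'If-None-Match': etag} if etag else {}
                resp = requests.get(URL, headers=headers, timeout=10)
                if resp.status_code == 200:       # content actually changed (or first fetch)
                    etag = resp.headers.get('ETag')
                    on_change(resp.text)
                elif resp.status_code == 304:     # unchanged: only headers crossed the wire
                    pass
                time.sleep(interval)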

  • jQuery ajax response not operating correctly

    - by mmarceau
    OK, this is frustrating... The code below works "correctly" as far as sending the email address to the SaveEmail URL, and the address gets saved correctly each time I change the drop-down. However, it only outputs the "Successful" message once, no matter how many times I change the value in the drop-down. The "data" that is returned is "Successful". I would like to show the message for a couple of seconds, then fade it out. It works correctly the first time I change the drop-down; after that the change happens and the value gets saved, but the "Successful" message doesn't display.

    jQuery code:

        $('#AgentEmails').change(function() {
            var NewAddress = $('#AgentEmails').val();
            $.post('SaveEmail.aspx', { email: NewAddress }, function(data) {
                $('#SelectMsg').html("<b>" + data + "</b>").fadeOut();
            });
        });

    HTML code:

        <select ID='AgentEmails' runat='server'>
            <option value="[email protected]">TEST</option>
        </select><span id='SelectMsg'></span>

    What needs to be changed in my code to make this work correctly? Thanks for the help.

  • Changing direction of rotation Pygame

    - by czl
    How would you change the direction of a rotating image/rect in Pygame? Applying positive and negative degree values works, but it seems to only rotate in one direction throughout my window. Is there a way to ensure a change in the direction of rotation? Perhaps change the rotation of a spinning image every 5 seconds, or change the direction of the spin when hitting the X or Y axis. I've added some code below. It seems like switching movement directions is easy with rect.move_ip; as long as I specify a speed and have a location check, it does what I want. Unfortunately rotation isn't like that. Here I'm adding angles to make sure it spins, but no matter what I try, I'm unable to negate the rotation.

        def rotate_image(self):  # rotate image
            orig_rect = self.image.get_rect()
            rot_image = pygame.transform.rotate(self.image, self.angle)
            rot_rect = orig_rect.copy()
            rot_rect.center = rot_image.get_rect().center
            rot_image = rot_image.subsurface(rot_rect).copy()
            return rot_image

        def render(self):
            self.screen.fill(self.bg_color)
            self.rect.move_ip(0, 5)  # Y axis movement at 5 px per frame
            self.angle += 5          # add 5 to the angle when the rect has not hit a window edge
            self.angle %= 360
            if self.rect.left < 0 or self.rect.right > self.width:
                self.speed[0] = -self.speed[0]
                self.angle = -self.angle  # tried to invert the angle
                self.angle -= 5           # trying to negate the angle rotation
                self.angle %= 360
            self.screen.blit(self.rotate_image(), self.rect)
            pygame.display.flip()

    I would really like to know how to invert the rotation of an image. You may provide your own examples.
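    For what it's worth, a common way to make the direction change stick (a sketch with made-up attribute names, not the poster's code) is to store a signed per-frame rotation step and flip that on a boundary hit, rather than negating the accumulated angle while still adding a positive 5 every frame:

        # minimal sketch: direction changes by flipping the sign of the per-frame step
        import pygame

        class Spinner(object):
            def __init__(self, screen, image, width):
                self.screen = screen
                self.image = image
                self.rect = image.get_rect()
                self.width = width
                self.angle = 0
                self.rotation_speed = 5          # degrees per frame; the sign encodes direction

            def render(self):
                self.screen.fill((0, 0, 0))      # clear the previous frame
                self.rect.move_ip(0, 5)
                if self.rect.left < 0 or self.rect.right > self.width:
                    self.rotation_speed = -self.rotation_speed   # reverse the spin here
                self.angle = (self.angle + self.rotation_speed) % 360
                rotated = pygame.transform.rotate(self.image, self.angle)
                rot_rect = rotated.get_rect(center=self.rect.center)
                self.screen.blit(rotated, rot_rect)
                pygame.display.flip()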

  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Code Jam Africa problem (the contest is already finished; I just did it to improve my programming skills).

    The problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers.

    The input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be one line containing the value G, the number of guests, and one line containing a space-separated list of G integers. Each integer C indicates the invitation code of a guest.

    The output: For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone.

    The limits: 1 ≤ N ≤ 50; 0 < C ≤ 2147483647. Small dataset: 3 ≤ G < 100. Large dataset: 3 ≤ G < 1000.

    Sample input:

        3
        3
        1 2147483647 2147483647
        5
        3 4 7 4 3
        5
        2 10 2 10 5

    Sample output:

        Case #1: 1
        Case #2: 7
        Case #3: 5

    This is the solution that I came up with:

        with open('A-large-practice.in') as f:
            lines = f.readlines()

        with open('A-large-practice.out', 'w') as output:
            N = int(lines[0])
            for testcase, i in enumerate(range(1, 2*N, 2)):
                G = int(lines[i])
                for guest in range(G):
                    codes = map(int, lines[i+1].split(' '))
                    alone = (c for c in codes if codes.count(c) == 1)
                output.write("Case #%d: %d\n" % (testcase+1, alone.next()))

    It runs in 12 seconds on my machine with the large input. Now, my question is: can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++, but I can't translate those solutions back to Python.
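    Since every invitation code appears exactly twice except the lone guest's, one standard trick (not mentioned in the post) is to XOR all the codes in a case together: paired values cancel out and the survivor is the answer, which is O(G) time and O(1) extra memory per case. A sketch against the same practice files:

        # sketch of the XOR approach; file names match the question's practice input
        with open('A-large-practice.in') as f:
            lines = f.read().split('\n')

        with open('A-large-practice.out', 'w') as output:
            N = int(lines[0])
            for case in range(N):
                codes = map(int, lines[2 + 2*case].split())
                alone = 0
                for c in codes:
                    alone ^= c           # pairs cancel out, the lone code remains
                output.write("Case #%d: %d\n" % (case + 1, alone))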

  • OCR combined with font recognition?

    - by Adam
    I have a bold idea: a user could take an image like the following and, within a few seconds of processing, be able to edit a document that looks roughly the same. The software would use WhatTheFont (or something similar) to recognize the fonts used, and OCR and other software to handle the font size, color, line spacing, and of course the text content itself. In the case of the example image, there would be three separate "textboxes" produced, each starting at the upper-left corner of the text and extending as far toward the bottom right as it could before running into another text box. So the user would then see something like this (the rectangles are just used to show the boundaries of each textbox). From here, the user would be able to edit the text in each of these boxes to create a new document. Of course there are tons of obvious uses for such an application, especially on a mobile phone with a built-in camera. So my questions are the following: I doubt the answer is yes, but does anything do this already? If I'm going to try to build this, what should I write it in? Can I use Python? What would be the best OCR libraries to start with? Is there a service other than WhatTheFont for font recognition that has better API support? Anybody want to help me build it? :) etc. Update: one thing I wanted to mention (but forgot) is that I would also like the background to be preserved. In other words, if the example above had an image behind the text, I'd like the document to use that image with the text removed. I know this complicates things a lot, because that would require some image-editing techniques too (something akin to Photoshop CS5's "content-aware fill"). But if we can solve diminished reality on iPhones, I think we can figure this out!
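    On the "can I use Python / which OCR library" part, here is a minimal sketch of just the text-extraction step using the Tesseract engine through the pytesseract wrapper (both are suggestions rather than anything named in the post; font identification and layout reconstruction are separate problems, and the file name is hypothetical):

        # sketch: OCR step only, assuming Tesseract is installed and on the PATH
        from PIL import Image
        import pytesseract

        def extract_text(path):
            img = Image.open(path)
            # plain text of the whole page
            text = pytesseract.image_to_string(img)
            # character-level bounding boxes, a starting point for grouping text into boxes
            boxes = pytesseract.image_to_boxes(img)
            return text, boxes

        if __name__ == '__main__':
            text, boxes = extract_text('sample_page.png')
            print(text)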
