Search Results

Search found 4690 results on 188 pages for 'ran'.

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on disk, 10M of indexes according to pgAdmin.

    The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT took less than a second to execute, which isn't too surprising for a table <= 20M on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). Tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments?

    Table layout:

        id serial primary key, a bunch of int32
        3 foreign keys to:
            small table, 198 rows, 16k on disk
            large table, 1.2M rows, 59 data + 89 index MB on disk
            large table, 2.2M rows, 198 + 210MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining the bla_id fields (x3) myself and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
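
    A sketch of the drop-and-recreate route hinted at above (hedged: the table and constraint names are invented, and it trades per-row FK checks for a window without referential enforcement; re-adding the constraint validates all rows in one pass):

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_small_id_fkey;
        -- ...drop the other two FK constraints the same way...
        INSERT INTO hourly_table SELECT * FROM staging_table;
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        -- ...re-add the other two FK constraints...
        COMMIT;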

  • Specializing a class template constructor

    - by SilverSun
    I'm messing around with template specialization and I ran into a problem trying to specialize the constructor based on which policy is used. Here is the code I am trying to get to work:

        #include <cstdlib>
        #include <ctime>

        class DiePolicies {
        public:
            class RollOnConstruction { };
            class CallMethod { };
        };

        #include <boost/static_assert.hpp>
        #include <boost/type_traits/is_same.hpp>

        template<unsigned sides = 6, typename RollPolicy = DiePolicies::RollOnConstruction>
        class Die {
            // policy type check
            BOOST_STATIC_ASSERT((
                boost::is_same<RollPolicy, DiePolicies::RollOnConstruction>::value ||
                boost::is_same<RollPolicy, DiePolicies::CallMethod>::value));

            unsigned m_die;
            unsigned random() { return rand() % sides; }
        public:
            Die();
            void roll() { m_die = random(); }
            operator unsigned () { return m_die + 1; }
        };

        template<unsigned sides>
        Die<sides, DiePolicies::RollOnConstruction>::Die() : m_die(random()) { }

        template<unsigned sides>
        Die<sides, DiePolicies::CallMethod>::Die() : m_die(0) { }

    These are the errors I get in Microsoft Visual Studio 2010:

        ...\main.cpp(29): error C3860: template argument list following class template name must list parameters in the order used in template parameter list
        ...\main.cpp(29): error C2976: 'Die' : too few template arguments
        ...\main.cpp(31): error C3860: template argument list following class template name must list parameters in the order used in template parameter list

    I'm thinking either I can't figure out the right syntax for the specialization, or maybe it isn't possible to do it this way.
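
    The compiler is within its rights here: a member of a class template can only be explicitly specialized for a fully specified set of template arguments, so with sides still open these constructor definitions are partial specializations, which aren't allowed for member functions. A sketch of the usual tag-dispatch workaround (the init helpers are invented names; the static assert is omitted for brevity):

        template<unsigned sides = 6, typename RollPolicy = DiePolicies::RollOnConstruction>
        class Die {
            unsigned m_die;
            unsigned random() { return rand() % sides; }
            // overload on the policy type instead of specializing the constructor
            unsigned init(DiePolicies::RollOnConstruction) { return random(); }
            unsigned init(DiePolicies::CallMethod)         { return 0; }
        public:
            Die() : m_die(init(RollPolicy())) { }
            void roll() { m_die = random(); }
            operator unsigned () { return m_die + 1; }
        };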

  • ASP.NET web setup class is not defined

    - by Wayne Werner
    Hi, I've got an ASP.NET application that I installed by creating a web setup. I ran into a problem where ASP.NET wasn't registered with IIS, so it gave me an "installation was interrupted" message that told me exactly nothing. Anyhow, I finally got it installed, and I can access the main page, but it's telling me that my class isn't defined. The dll is in the same directory as the Default.aspx page. Here's the main error information:

        Compiler Error Message: BC30002: Type 'SIValidator.SIValidator' is not defined.

        Source Error:
        Line 4:
        Line 5:  <script runat="server">
        Line 6:      Dim validator As New SIValidator.SIValidator()
        Line 7:      Protected table As New arrayList()
        Line 8:      Protected countyByDistrict As New Hashtable()

        Version Information: Microsoft .NET Framework Version:2.0.50727.1873; ASP.NET Version:2.0.50727.1433

    Am I doing it wrong? Is there some obscure setting that may not be set? I'm completely new to this VS deployment deal, so I'm trying to learn the right terms to ask the right questions. Thanks for any help.

    Edit: As an aside, when I searched Google 5 minutes later, this entry came up as the first result. Would have been awesome if there was an answer for me then :P

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines.

    I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and are there any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks. A lot of them are used to nullify data returned by system information calls (such as the remote debugging port), and I managed to find out that whatever is doing this is using madCHook.dll (by madshi), which is also injected into every running process (this seems to be anti-debugging code). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h

        #import <Foundation/NSObject.h>

        @interface Fraction: NSObject {
            int numerator;
            int denominator;
        }
        -(void) print;
        -(void) setNumerator: (int) n;
        -(void) setDenominator: (int) d;
        -(int) numerator;
        -(int) denominator;
        @end

    Fraction.m

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        -(void) print { printf( "%i/%i", numerator, denominator ); }
        -(void) setNumerator: (int) n { numerator = n; }
        -(void) setDenominator: (int) d { denominator = d; }
        -(int) denominator { return denominator; }
        -(int) numerator { return numerator; }
        @end

    main.m

        #import <stdio.h>
        #import "Fraction.h"

        int main( int argc, const char *argv[] ) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];

            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];

            // print it
            printf( "The fraction is: " );
            [frac print];
            printf( "\n" );

            // free memory
            [frac release];

            return 0;
        }

    I've tried two approaches to compile it.

    1. Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    2. I created a GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make
        TOOL_NAME = main
        main_OBJC_FILES = main.m
        include ${GNUSTEP_MAKEFILES}/tool.make

    ...and ran:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the compiler gets stuck at an undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?
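
    Both failures are link-time rather than compile-time, and in both builds Fraction.m is never compiled or linked, which would explain the missing class symbol. A sketch of the fix under that assumption, for each approach:

        $ gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

        # and in the GNUmakefile, list both sources:
        main_OBJC_FILES = main.m Fraction.m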

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state.

    My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C
        Interrupted; stopping...    // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
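
    A sketch of two common remedies (the handler and helper names are mine): either ask the OS to restart system calls interrupted by that signal (signal.siginterrupt, available from Python 2.6), or catch EINTR around the blocking call and retry:

        import errno
        import signal
        import socket

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True

        signal.signal(signal.SIGINT, handle_sigint)
        # option 1: restart interrupted system calls instead of raising EINTR
        signal.siginterrupt(signal.SIGINT, False)

        # option 2: tolerate EINTR and retry the blocking call
        def recv_retry(sock, n):
            while True:
                try:
                    return sock.recv(n)
                except socket.error as e:
                    if e.args[0] != errno.EINTR:
                        raise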

  • jQuery "growl-like" effect in VB.net

    - by StealthRT
    Hey all, I have made a simple form that mimics the jQuery "growl" effect seen here: http://www.sandbox.timbenniks.com/projects/jquery-notice/

    However, I have run into a problem. If I have more than one call to the form to display a growl, then it just refreshes the same form with whatever call I send it. In other words, I can only display one form at a time instead of having one drop down and a new one appear above it. Here is my simple form code for the growl form:

        Public Class msgWindow
            Public howLong As Integer
            Public theType As String
            Private loading As Boolean

            Protected Overrides Sub OnPaint(ByVal pe As System.Windows.Forms.PaintEventArgs)
                Dim pn As New Pen(Color.DarkGreen)
                If theType = "OK" Then
                    pn.Color = Color.DarkGreen
                ElseIf theType = "ERR" Then
                    pn.Color = Color.DarkRed
                Else
                    pn.Color = Color.DarkOrange
                End If
                pn.Width = 2
                pe.Graphics.DrawRectangle(pn, 0, 0, Me.Width, Me.Height)
                pn = Nothing
            End Sub

            Public Sub showMessageBox(ByVal typeOfBox As String, ByVal theMessage As String)
                Me.Opacity = 0
                Me.Show()
                Me.SetDesktopLocation(My.Computer.Screen.WorkingArea.Width - 350, 15)
                Me.loading = True
                theType = typeOfBox
                lblSaying.Text = theMessage
                If typeOfBox = "OK" Then
                    Me.BackColor = Color.FromArgb(192, 255, 192)
                ElseIf typeOfBox = "ERR" Then
                    Me.BackColor = Color.FromArgb(255, 192, 192)
                Else
                    Me.BackColor = Color.FromArgb(255, 255, 192)
                End If
                If Len(theMessage) <= 30 Then
                    howLong = 4000
                ElseIf Len(theMessage) >= 31 And Len(theMessage) <= 80 Then
                    howLong = 7000
                ElseIf Len(theMessage) >= 81 And Len(theMessage) <= 100 Then
                    howLong = 12000
                Else
                    howLong = 17000
                End If
                Me.opacityTimer.Start()
            End Sub

            Private Sub opacityTimer_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles opacityTimer.Tick
                If Me.loading Then
                    Me.Opacity += 0.07
                    If Me.Opacity >= 0.8 Then
                        Me.opacityTimer.Stop()
                        Me.opacityTimer.Dispose()
                        Pause(howLong)
                        Me.loading = False
                        Me.opacityTimer.Start()
                    End If
                Else
                    Me.Opacity -= 0.08
                    If Me.Opacity <= 0 Then
                        Me.opacityTimer.Stop()
                        Me.Close()
                    End If
                End If
            End Sub

            Public Sub Pause(ByVal Milliseconds As Integer)
                Dim dTimer As Date
                dTimer = Now.AddMilliseconds(Milliseconds)
                Do While dTimer > Now
                    Application.DoEvents()
                Loop
            End Sub
        End Class

    I call the form with this simple call:

        Call msgWindow.showMessageBox("OK", "Finished searching images.")

    Does anyone know a way where I can have the same setup but show any number of forms without refreshing the same form over and over again? Like always, any help would be great! :) David
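
    A hedged sketch of the likely cause: calling msgWindow.showMessageBox(...) through the class name uses VB's default form instance (My.Forms), so every notification reuses the same window. Creating a fresh instance per call is the common fix; a vertical stacking offset would still need to be added inside showMessageBox:

        ' one new window per notification instead of the shared default instance
        Dim growl As New msgWindow()
        growl.showMessageBox("OK", "Finished searching images.")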

  • Python virtualenv questions

    - by orokusaki
    I'm using virtualenv on Windows XP, and I'm wondering if I have my brain wrapped around it correctly.

    I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHON_PATH variable to point at the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory).

    I understand why I should use virtualenv, so I can run multiple versions of Django and other libraries on the same machine, but does this mean that to switch between environments I have to basically change my PATH and PYTHON_PATH variables? So I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to automatically do all this (I could update my path in Python code, but that would require me to write machine-specific code in my application)?

    Also, how does this process compare to using virtualenv on Linux (I'm quite the beginner at Linux)?
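
    For reference, a sketch of the activate-script workflow that virtualenv ships with, which does the PATH switching per console session (paths follow the layout described above; the ENV2 path is a placeholder):

        C:\> C:\WINDOWS\system32\ENV\Scripts\activate.bat
        (ENV) C:\> rem ...work against ENV's Python and packages...
        (ENV) C:\> deactivate
        C:\> C:\path\to\ENV2\Scripts\activate.bat
        (ENV2) C:\> rem ...now ENV2 is active...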

  • SQL Server - stored procedure suddenly becomes slow

    - by Barguast
    I have written a stored procedure that, yesterday, typically completed in under a second. Today, it takes about 18 seconds. I ran into the problem yesterday as well, and it seemed to be solved by DROPping and re-CREATEing the stored procedure. Today, that trick doesn't appear to be working. :(

    Interestingly, if I copy the body of the stored procedure and execute it as a straightforward query, it completes quickly. It seems to be the fact that it's a stored procedure that's slowing it down...!

    Does anyone know what the problem might be? I've searched for answers, but often they recommend running it through Query Analyzer, which I don't have - I'm using SQL Server 2008 Express for now. The stored procedure is as follows:

        ALTER PROCEDURE [dbo].[spGetPOIs]
            @lat1 float,
            @lon1 float,
            @lat2 float,
            @lon2 float,
            @minLOD tinyint,
            @maxLOD tinyint,
            @exact bit
        AS
        BEGIN
            -- Create the query rectangle as a polygon
            DECLARE @bounds geography;
            SET @bounds = dbo.fnGetRectangleGeographyFromLatLons(@lat1, @lon1, @lat2, @lon2);

            -- Perform the selection
            IF (@exact = 0)
            BEGIN
                SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD],
                       [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID]
                FROM [POIs]
                WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD]))
                  AND (@bounds.Filter([Location]) = 1)
            END
            ELSE
            BEGIN
                SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD],
                       [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID]
                FROM [POIs]
                WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD]))
                  AND (@bounds.STIntersects([Location]) = 1)
            END
        END

    The POIs table has an index on MinLOD, MaxLOD, and a spatial index on Location.
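
    Those symptoms (fast as an ad-hoc batch, slow as a procedure, temporarily cured by DROP/CREATE) are the classic shape of parameter sniffing: a cached plan built for unrepresentative parameter values. A hedged sketch of two standard mitigations, either of which may or may not apply here:

        -- 1) force a fresh plan on every call
        ALTER PROCEDURE [dbo].[spGetPOIs]
            @lat1 float, @lon1 float, @lat2 float, @lon2 float,
            @minLOD tinyint, @maxLOD tinyint, @exact bit
        WITH RECOMPILE
        AS
        BEGIN
            -- ...body unchanged...
        END

        -- 2) or, inside the existing body, copy parameters into locals so the
        --    optimizer uses density estimates instead of the sniffed values
        DECLARE @minL tinyint = @minLOD, @maxL tinyint = @maxLOD;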

  • SQL Server performance issue.

    - by Jit
    Hi friends, I have been trying to analyze a performance issue with SQL Server 2005. We have 30 jobs, one for each database (30 databases, one per client). The jobs run in the early morning at an interval of 5 minutes.

    When I run a job individually for testing, for most of the databases it finishes in 7 to 9 minutes. But when these jobs run in the early morning, I see a few jobs taking 2 to 3 hours to finish, while the same job takes a few minutes, as mentioned above, if run independently. We don't have any other job scheduled during that time, other than these 30 jobs.

    If we restart the server, then for 2 or so days all the jobs finish in a few minutes, but over a period of time (from the 3rd day, suddenly), a few jobs start taking hours to finish. What could be the possible reason for performance degradation over a period of time? I verified all the SPs; we use temp tables, and I made sure no temp table is left without being dropped at the end of its SP.

    Let me know what the possible reasons for such behavior are. Appreciate your time and help. Thanks
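
    One low-risk thing to rule out (an assumption on my part, since gradual slowdown that a restart resets often points at stale statistics or plan-cache drift rather than at the jobs themselves) is refreshing statistics in the affected databases and watching whether the pattern resets the way a reboot does:

        EXEC sp_updatestats;                           -- all tables in the current database
        UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN; -- or one (hypothetical) table at a time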

  • SIGABRT on iPhone when changing xib

    - by Boz
    I've just finished off an app for the iPhone which, until today, ran fine on the iPhone simulator and actual devices. I tried changing the xib which is loaded in the applicationDidFinishLaunching method in my application delegate class - all I did was change the string in initWithNibName.

    When I launch the app on the simulator, the Default.png image is shown, then the app crashes with an uncaught exception. When running on a device, the Default.png image is shown for about 10 seconds, the UI is never loaded, and I get 'GDB: Program received signal: "SIGABRT".' on the Xcode status bar. Debugging shows that applicationDidFinishLaunching is never actually reached before the app crashes.

    Setting the starting xib back to the original solves the issue, but now I've made a change to the original and saved it in Interface Builder, and the app shows the same issues as above - I've made no code changes at all. Is this a memory issue, or a known issue or common mistake?

    NOTE: I've made no code changes whatsoever, and the only changes I've made to the xib are cosmetic; the IBOutlets are all intact.

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's what it's called). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1.

    My first attempt was to directly map the values returned by the linear function to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it looked as if the sweep ran much faster when moving through low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it's something to do with human hearing perceiving changes in frequency in a non-linear manner.

    My next attempt was to move the filter's cut-off frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
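
    A sketch of the standard mapping (the variable names are mine): pitch perception is roughly logarithmic in frequency, so a sweep feels uniform when each step multiplies the cut-off by a constant factor, i.e. geometric rather than linear interpolation between the endpoints:

        # constant perceived sweep speed: geometric interpolation of the cut-off
        # f_min, f_max bound the sweep; lf(x) is the user-provided function in [0, 1]
        def cutoff(x, f_min=20.0, f_max=20000.0):
            return f_min * (f_max / f_min) ** lf(x)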

  • Variable reference problem when loading an object from a file in Java

    - by Snail
    I have a problem with the reference of a variable when loading a saved, serialized object from a data file. The variables referencing the same object don't seem to be updated after the load. I've made a code snippet below that illustrates the problem.

        Tournament test1 = new Tournament();
        Tournament test2 = test1;

        try {
            FileInputStream fis = new FileInputStream("test.out");
            ObjectInputStream in = new ObjectInputStream(fis);
            test1 = (Tournament) in.readObject();
            in.close();
        } catch (IOException ex) {
            Logger.getLogger(Frame.class.getName()).log(Level.SEVERE, null, ex);
        } catch (ClassNotFoundException ex) {
            Logger.getLogger(Frame.class.getName()).log(Level.SEVERE, null, ex);
        }

        System.out.println("test1: " + test1);
        System.out.println("test2: " + test2);

    After this code has run, test1 and test2 no longer reference the same object. To my knowledge they should, since the declaration of test2 makes it a reference to test1. When test1 is updated, test2 should reflect the change and return the new object when called in the code. Am I missing something essential here, or have I been mistaught about how variable references in Java work?
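
    For context, a minimal sketch of the semantics involved (reusing the Tournament type from the question): Java variables hold references to objects, not references to other variables, so reassigning one variable never affects another:

        Tournament t1 = new Tournament();
        Tournament t2 = t1;          // t2 and t1 now refer to the SAME object
        t1 = new Tournament();       // rebinds t1 only...
        // ...t2 still refers to the original object; nothing "follows" t1,
        // which is exactly what happens when readObject() is assigned to test1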

  • Would anybody recommend learning J/K/APL?

    - by ozan
    I came across J/K/APL a few months ago while working my way through some Project Euler problems, and was intrigued, to say the least. For every elegant-looking 20-line Python solution I produced, there'd be a gobsmacking 20-character J solution that ran in a tenth of the time. I've been keen to learn some basic J, and have made a few attempts at picking up the vocabulary, but have found the learning curve to be quite steep.

    To those who are familiar with these languages: would you recommend investing some time to learn one (I'm thinking J in particular)? I would do so more for the purpose of satisfying my curiosity than for career advancement or some such thing.

    Some personal circumstances to consider, if you care to:

        I love mathematics, and use it daily in my work (as a mathematician for a startup), but to be honest I don't really feel limited by the tools that I use (like Python + NumPy), so I can't use that excuse.
        I have no particular desire to work in the finance industry, which seems to be the main port of call for K users at least. Plus I should really learn C# as a next language, as it's the primary language where I work. So practically speaking, J almost definitely shouldn't be the next language I learn.
        I'm reasonably familiar with MATLAB, so using an array-based programming language wouldn't constitute a tremendous paradigm shift.

    Any advice from those familiar with these languages would be much appreciated.

  • C++ Exception Handling

    - by user1413793
    So I was writing some code and I noticed that, apart from syntactical, type, and other compile-time errors, C++ does not throw any other exceptions. So I decided to test this out with a very trivial program:

        #include <iostream>

        int main()
        {
            std::cout << 5 / 0 << std::endl;
            return 1;
        }

    When I compiled it using g++, g++ gave me a warning saying I was dividing by 0, but it still compiled the code. Then when I ran it, it printed some really large arbitrary number.

    What I want to know is: how does C++ deal with exceptions? Integer division by 0 should be a very trivial example of when an exception should be thrown and the program should terminate. Do I have to essentially enclose my entire program in a huge try block and then catch certain exceptions? I know that in Python, when an exception is thrown, the program immediately terminates and prints out the error. What does C++ do? Are there even runtime exceptions which stop execution and kill the program?
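
    For what it's worth, integer division by zero in C++ is undefined behaviour: the language throws nothing by itself, which is why the program compiles, runs, and prints garbage. A sketch of the guard-and-throw idiom you would use if you want an exception there (checked_div is an invented name):

        #include <iostream>
        #include <stdexcept>

        // C++ will not throw on integer division by zero; check and throw yourself
        int checked_div(int a, int b) {
            if (b == 0)
                throw std::domain_error("division by zero");
            return a / b;
        }

        int main() {
            try {
                std::cout << checked_div(5, 0) << std::endl;
            } catch (const std::exception& e) {
                std::cerr << "caught: " << e.what() << std::endl;
            }
            return 0;
        }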

  • Question about memory allocation when initializing char arrays in C/C++.

    - by Carlos Nunez
    Before anything, I apologize if this question has been asked before. I am programming a simple packet sniffer for a class project. For a little while, I ran into the issue where the source and destination of a packet appeared to be the same. For example, the source and destination of an Ethernet frame would be the same MAC address all of the time. I custom-made ether_ntoa(char *) because Windows does not seem to have ethernet.h like Linux does. The code snippet is below:

        char *ether_ntoa(u_char etheraddr[ETHER_ADDR_LEN])
        {
            int i, j;
            char eout[32];

            for (i = 0, j = 0; i < 5; i++) {
                eout[j++] = etheraddr[i] >> 4;
                eout[j++] = etheraddr[i] & 0xF;
                eout[j++] = ':';
            }
            eout[j++] = etheraddr[i] >> 4;
            eout[j++] = etheraddr[i] & 0xF;
            eout[j++] = '\0';

            for (i = 0; i < 17; i++) {
                if (eout[i] < 10)
                    eout[i] += 0x30;
                else if (eout[i] < 16)
                    eout[i] += 0x57;
            }

            return (eout);
        }

    I solved the problem by using malloc() to have the memory assigned at run time (i.e. instead of char eout[32], I used char *eout; eout = (char *) malloc (32);). However, I thought that the compiler assigned different memory locations when one sized a char array at compile time. Is this incorrect? Thanks! Carlos Nunez
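
    The behaviour described is consistent with returning a pointer to a stack array: eout's lifetime ends when ether_ntoa returns, and successive calls tend to reuse the same stack slot, so source and destination print identically. A sketch of the usual malloc-free fix, a caller-supplied buffer (the _r name echoes the reentrant convention and is invented here):

        #include <stdio.h>

        /* caller owns buf, which must hold at least 18 bytes ("xx:...:xx" + NUL) */
        char *ether_ntoa_r(const u_char *etheraddr, char *buf)
        {
            sprintf(buf, "%02x:%02x:%02x:%02x:%02x:%02x",
                    etheraddr[0], etheraddr[1], etheraddr[2],
                    etheraddr[3], etheraddr[4], etheraddr[5]);
            return buf;
        }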

  • Drag and Drop in Silverlight with F# and Asynchronous Workflows

    - by knotig
    Hello everyone! I'm trying to implement drag and drop in Silverlight using F# and asynchronous workflows. I'm simply trying to drag a rectangle around the canvas, using two loops for the two states (waiting and dragging), an idea I got from Tomas Petricek's book "Real-World Functional Programming". But I ran into a problem: unlike WPF or WinForms, Silverlight's MouseEventArgs do not carry information about the button state, so I can't return from the drag loop by checking whether the left mouse button is no longer pressed. I only managed to solve this by introducing a mutable flag. Would anyone have a solution for this that does not involve mutable state? Here's the relevant code (please excuse the sloppy dragging code, which snaps the rectangle to the mouse pointer):

        type MainPage() as this =
            inherit UserControl()

            do Application.LoadComponent(this, new System.Uri("/SilverlightApplication1;component/Page.xaml", System.UriKind.Relative))
            let layoutRoot : Canvas = downcast this.FindName("LayoutRoot")
            let rectangle1 : Rectangle = downcast this.FindName("Rectangle1")

            let mutable isDragged = false
            do rectangle1.MouseLeftButtonUp.Add(fun _ -> isDragged <- false)

            let rec drag() = async {
                let! args = layoutRoot.MouseMove |> Async.AwaitEvent
                if (isDragged) then
                    Canvas.SetLeft(rectangle1, args.GetPosition(layoutRoot).X)
                    Canvas.SetTop(rectangle1, args.GetPosition(layoutRoot).Y)
                    return! drag()
                else
                    return () }

            let wait() = async {
                while true do
                    let! args = Async.AwaitEvent rectangle1.MouseLeftButtonDown
                    isDragged <- true
                    do! drag() }

            do
                Async.StartImmediate(wait())
                ()

    Thank you very much for your time!
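
    One flag-free shape (a sketch, assuming it is acceptable to merge the two event streams; the Choice encoding is mine): map MouseMove and MouseLeftButtonUp into a common type, merge them, and let the awaited value decide whether the drag loop continues:

        let moves = layoutRoot.MouseMove         |> Event.map Choice1Of2
        let ups   = rectangle1.MouseLeftButtonUp |> Event.map Choice2Of2

        let rec drag() = async {
            let! msg = Async.AwaitEvent (Event.merge moves ups)
            match msg with
            | Choice1Of2 args ->
                // a move event: reposition and keep dragging
                Canvas.SetLeft(rectangle1, args.GetPosition(layoutRoot).X)
                Canvas.SetTop(rectangle1, args.GetPosition(layoutRoot).Y)
                return! drag()
            | Choice2Of2 _ ->
                // the button came up: fall out of the loop, no mutable flag
                return () }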

  • Files under Program Files have a split personality

    - by regularfry
    I have a Ruby application I'm installing (along with a packaged Ruby interpreter) under Program Files on Windows 7 with an NSIS-built installer. In order to debug it, I edited one of the files to add some debugging statements. After that, I uninstalled the package and ran a new version of the installer which includes a new copy of the edited file, without debugging statements. Now I can't get the new copy to load into Ruby:

        If I run type <filename> in cmd.exe, or open the file in Notepad.exe or Firefox, I see the new version.
        If I run ruby -e "puts File.read('<filename>')", or open the file in emacs, I see the old version.
        If, in Windows Explorer, I copy the file to a new filename, everything can see the new contents at that filename.
        If I delete the original file and rename the copy to replace the original, the split personality returns.

    This situation survives a reboot, so it's not a simple matter of a file being accidentally held open. What on earth is going on here? Is there some aspect of the install process that might be checkpointing the file in a way I can revert, or at least switch off while I'm debugging the installer?
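
    One thing worth checking (an assumption, not something stated in the question): on Windows 7, UAC file virtualization can redirect a program's writes under Program Files into a per-user VirtualStore, which produces exactly this two-programs-see-two-files behaviour. A quick probe for a shadow copy (the application directory name is a placeholder):

        dir "%LOCALAPPDATA%\VirtualStore\Program Files\<YourAppDir>"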

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, etc.

    I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure about what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script that runs every 15-30 minutes, which will normalize the data and insert it into another MySQL table.

    How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so any intuitive ways, if there are multiple ways of going at it, you smart people can think of... please let me know :)
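
    To make the flat-file-plus-cron idea concrete, a hedged sketch (the file path, table, and column names are all invented): append one CSV line per hit from PHP, then bulk-load on a schedule, turning millions of single-row INSERTs into a handful of bulk operations:

        -- run by the 15-30 minute cron job: one bulk load per cycle
        LOAD DATA INFILE '/var/log/tracker/hits.csv'
        INTO TABLE raw_hits
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
        (site_id, page_url, hit_time);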

  • ResourceFilterFactory and non-Path annotated Resources

    - by tousdan
    (I'm using Jersey 1.7.) I am attempting to add a ResourceFilterFactory to my project to select which filters are used per method, using annotations. The ResourceFilterFactory seems to be able to create filters for resources that are annotated with the Path annotation, but it would seem that it does not attempt to generate filters for the methods behind the SubResourceLocator of the resources that are called.

        @Path("a")
        public class A {
            // sub resource locator?
            @Path("b")
            public B getB() {
                return new B();
            }

            @GET
            public void doGet() {}
        }

        public class B {
            @GET
            public void doOtherGet() { }

            @Path("c")
            public void doInner() { }
        }

    When run, the filter factory will only be called for the following:

        AbstractResourceMethod(A#doGet)
        AbstractSubResourceLocator(A#getB)

    when I expected it to be called for every method of the sub-resource. I'm currently using the following options in my web.xml:

        <init-param>
            <param-name>com.sun.jersey.spi.container.ResourceFilters</param-name>
            <param-value>com.my.MyResourceFilterFactory</param-value>
        </init-param>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>com.my.resources</param-value>
        </init-param>

    Is my understanding of the filter factory flawed?

  • Web service performing asynchronous call

    - by kornelijepetak
    I have a webservice method FetchNumber() that fetches a number from a database and then returns it to the caller. But just before it returns the number to the caller, it needs to send this number to another service, so it instantiates and runs a BackgroundWorker whose job is to send this number to that other service.

        public class FetchingNumberService : System.Web.Services.WebService
        {
            [WebMethod]
            public int FetchNumber()
            {
                int value = Database.GetNumber();
                AsyncUpdateNumber async = new AsyncUpdateNumber(value);
                return value;
            }
        }

        public class AsyncUpdateNumber
        {
            public AsyncUpdateNumber(int number)
            {
                sendingNumber = number;
                worker = new BackgroundWorker();
                worker.DoWork += asynchronousCall;
                worker.RunWorkerAsync();
            }

            private void asynchronousCall(object sender, DoWorkEventArgs e)
            {
                // Sending a number to a service (which is synchronous) here
            }

            private int sendingNumber;
            private BackgroundWorker worker;
        }

    I don't want to block the web service (FetchNumber()) while sending this number to another service, because that can take a long time and the caller does not care about it. The caller expects this to return as soon as possible. FetchNumber() makes the background worker and runs it, then finishes (while the worker is running on the background thread). I don't need any progress report or return value from the background worker; it's more of a fire-and-forget concept.

    My question is this. Since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) finishes while the background worker it instantiated and ran is still running? What happens to the background thread? When does the GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.
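
    On the lifetime question, a hedged note in code form: BackgroundWorker runs DoWork on a thread-pool thread, and the GC treats a running thread's stack as a root, so the worker and anything it references stay alive after FetchNumber() returns; the service object being collected later doesn't stop it. An equivalent fire-and-forget sketch without BackgroundWorker (SendNumber is a made-up stand-in for the synchronous call):

        using System.Threading;

        [WebMethod]
        public int FetchNumber()
        {
            int value = Database.GetNumber();
            // the pool thread outlives this method call; the captured value
            // keeps everything the delegate needs alive until it finishes
            ThreadPool.QueueUserWorkItem(delegate { SendNumber(value); });
            return value;
        }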

  • Python raw strings and trailing backslashes.

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justifications for this behavior. I thought of it just now reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python.

    Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there also. (I know I can also use forward slashes, but throwaway scripts often contain content cut & pasted from elsewhere in Windows.)

    So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string.

        In [9]: r'\n'
        Out[9]: '\\n'

        In [10]: r'abc\n'
        Out[10]: 'abc\\n'

        In [11]: r'abc\'
        ------------------------------------------------
           File "<ipython console>", line 1
             r'abc\'
                   ^
        SyntaxError: EOL while scanning string literal

        In [12]: r'abc\\'
        Out[12]: 'abc\\\\'

    So one slash before the closing quote is an error, but two slashes give you two slashes! Certainly I'm not the only one that is bothered by this?

    Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?
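
    For completeness, a sketch of the usual workarounds when a raw string has to end in a backslash (relying on the fact that adjacent string literals are concatenated at compile time; the path is invented):

        path = r'C:\throwaway\scripts' '\\'    # raw body, ordinary one-backslash tail
        path = r'C:\throwaway\scripts' + '\\'  # explicit concatenation works too
        path = 'C:\\throwaway\\scripts\\'      # or escape everything and skip r''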

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same SELECT:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). It still uses a sequential scan and ignores the index! Where is the problem?
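
    For what it's worth, the mismatch is visible in the plan's Filter line: the index is on the expression lower(column2), but the query filters on bare column2, which that index cannot serve. A sketch of the query shape that matches the expression index:

        SELECT * FROM my_table WHERE lower(column2) = lower('value12');
        -- equivalently, since lower('value12') is simply 'value12':
        SELECT * FROM my_table WHERE lower(column2) = 'value12';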

  • How do I create efficient instance variable mutators in Matlab?

    - by Trent B
    Previously, I implemented mutators as follows; however, this ran spectacularly slowly in a recursive OO algorithm I'm working on, and I suspected it may have been because I was duplicating objects on every function call - is this correct?

        %% Example Only
        function obj2 = tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
            obj2 = obj1;
        end

    I then tried implementing mutators without using the output object; however, it appears that in MATLAB I can't do this - the changes won't "stick" because of a scope issue?

        %% Example Only
        function tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
        end

    For application purposes, an extremely simplified version of my code (which uses OO and recursion) is below.

        classdef myslice
            properties
                pts   % array of pts
                nROW  % number of rows
                nDIM  % number of dimensions
                subs  % sub-slices
            end % end properties

            methods
                function calcSubs(obj)
                    obj.subs = cell(1,obj.nROW);
                    for i=1:obj.nROW
                        obj.subs{i} = myslice;
                        obj.subs{i}.pts = obj.pts(1:i,2:end);
                    end
                end

                function vol = calcVol(obj)
                    if obj.nROW == 1
                        obj.volume = prod(obj.pts);
                    else
                        obj.volume = 0;
                        calcSubs(obj);
                        for i=1:obj.nROW
                            obj.volume = obj.volume + calcVol(obj.subs{i});
                        end
                    end
                end
            end % end methods
        end % end classdef
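
    A sketch of the standard answer, hedged: MATLAB value classes are copied on assignment and effectively on every mutating method call, which is what makes the recursive version crawl; deriving from handle gives reference semantics, so in-place mutators stick without returning the object:

        classdef myslice < handle   % handle class: methods mutate the object in place
            properties
                pts
            end
            methods
                function tripleAllPoints(obj)
                    obj.pts = obj.pts * 3;   % change persists; no output argument needed
                end
            end
        end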

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it?

    Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you

        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
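
    A quick way to check whether anything actually landed, using the standard data dictionary views (run the first as SYSTEM or another privileged user, the second when connected as IMPORTER):

        SELECT owner, table_name FROM all_tables WHERE owner = 'IMPORTER';

        SELECT table_name FROM user_tables;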
