Search Results

Search found 4828 results on 194 pages for 'strange'.

Page 182/194 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • Switch case assembly level code

    - by puffadder
    Hi All, I am programming in C on Cygwin on Windows. After doing a bit of C programming and getting comfortable with the language, I wanted to look under the hood and see what the compiler does with the code I write. So I wrote a code block containing switch-case statements and converted it to assembly using: gcc -S foo.c Here is the C source: switch(i) { case 1: { printf("Case 1\n"); break; } case 2: { printf("Case 2\n"); break; } case 3: { printf("Case 3\n"); break; } case 4: { printf("Case 4\n"); break; } case 5: { printf("Case 5\n"); break; } case 6: { printf("Case 6\n"); break; } case 7: { printf("Case 7\n"); break; } case 8: { printf("Case 8\n"); break; } case 9: { printf("Case 9\n"); break; } case 10: { printf("Case 10\n"); break; } default: { printf("Nothing\n"); break; } } And here is the resulting assembly: movl $5, -4(%ebp) cmpl $10, -4(%ebp) ja L13 movl -4(%ebp), %eax sall $2, %eax movl L14(%eax), %eax jmp *%eax .section .rdata,"dr" .align 4 L14: .long L13 .long L3 .long L4 .long L5 .long L6 .long L7 .long L8 .long L9 .long L10 .long L11 .long L12 .text L3: movl $LC0, (%esp) call _printf jmp L2 L4: movl $LC1, (%esp) call _printf jmp L2 L5: movl $LC2, (%esp) call _printf jmp L2 L6: movl $LC3, (%esp) call _printf jmp L2 L7: movl $LC4, (%esp) call _printf jmp L2 L8: movl $LC5, (%esp) call _printf jmp L2 L9: movl $LC6, (%esp) call _printf jmp L2 L10: movl $LC7, (%esp) call _printf jmp L2 L11: movl $LC8, (%esp) call _printf jmp L2 L12: movl $LC9, (%esp) call _printf jmp L2 L13: movl $LC10, (%esp) call _printf L2: In the assembly, the code first compares against the last case (case 10), which looks very strange, and then it copies 'i' into 'eax' and does things that are beyond me. I have heard that the compiler implements some kind of jump table for switch..case. Is that what this code is doing? And if so, how does it work? With only a few cases the generated code is quite similar to that of an if...else ladder, but as the number of cases grows, this unusual-looking implementation appears. Thanks in advance.
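
    What the compiler has emitted is indeed a jump table: cmpl $10 / ja is a single unsigned bounds check (it catches both i > 10 and negative i at once, and i == 0 falls through to table slot 0, which holds the default label), sall $2 scales i by 4 to index the array of addresses at L14, and jmp *%eax jumps through the chosen entry. A minimal sketch of the same idea in C, using GCC's computed-goto extension; three cases instead of ten, and all names are illustrative:

        #include <stdio.h>

        /* Hand-written equivalent of the compiler's jump table (GCC extension). */
        void dispatch_demo(int i)
        {
            static const void *table[4] = { &&def, &&c1, &&c2, &&c3 };
            if ((unsigned) i > 3)   /* one unsigned compare rejects i < 0 and i > 3 */
                goto def;
            goto *table[i];         /* this is the "jmp *%eax" in the listing */
        c1: puts("Case 1"); return;
        c2: puts("Case 2"); return;
        c3: puts("Case 3"); return;
        def: puts("Nothing");
        }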

    Read the article

  • Should I add try/catch around casting an attribute of a JSP implicit object?

    - by Michael Mao
    Hi all: Basically what I mean is this: List<String[]> routes = (List<String[]>)application.getAttribute("routes"); The above code tries to get an attribute named "routes" from the JSP implicit object application. But as everyone knows, after this line of code, routes may very well contain null, which means the application hasn't got an attribute named "routes". Is this "casting on null" good programming practice in Java or not? Basically I try to avoid exceptions such as java.io.InvalidCastException. I reckon things like this are a "heritage" of Java 1.4, when generic types had not yet been introduced to the language. So I guess everything stored in application attributes is stored as an Object (similar to the old-style ArrayList), and when you downcast, the cast may be invalid. What would you do in this case? Update: I just found that although I stored a List of String arrays in the implicit application object, when I do this: List<double[]> routes = (List<double[]>)application.getAttribute("routes"); it doesn't produce any error... which already made me uncomfortable... And even with this code: out.print(routes.get(0)); it prints out something strange: [Ljava.lang.String;@18b352f Am I printing a "pointer to a String array"? I can finally get an exception like this: out.print(routes.get(0)[1]); with the error: java.lang.ClassCastException: [Ljava.lang.String; Because I was the one who added the application attribute, I know which type it should be cast to. That feels not "good enough": what if I forget the exact type? I know there are still some cases where this sort of thing happens, such as casting session attributes in JSP/Servlet. Is there a better way to do this? Before you say "OMG, why don't you use Servlets???", I should justify my reasons: because my uni's server can only work with Servlets generated by JSP, and I don't really want to learn how to fix issues like that (look at my previous question on this); and because this is uni homework, I've got no other choice: forget all about WAR files, WEB-INF, etc., and "code everything directly into JSP", because the professors do that too. :)
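
    For what it's worth, the List<double[]> cast "succeeds" because generic type arguments are erased at runtime: the JVM only checks that the object is a List, and the element type is checked lazily when an element is actually used, which is why routes.get(0)[1] is the first line to throw. ([Ljava.lang.String;@18b352f is not a pointer, just the default toString() of a String[].) A hedged sketch of a defensive accessor (class and method names are illustrative):

        import java.util.Collections;
        import java.util.List;
        import javax.servlet.ServletContext;

        public final class Attributes {
            @SuppressWarnings("unchecked")
            public static List<String[]> routes(ServletContext application) {
                Object attr = application.getAttribute("routes");
                // instanceof can only verify the raw List type; the element
                // type is erased at runtime, hence the annotation
                if (attr instanceof List) {
                    return (List<String[]>) attr;
                }
                return Collections.emptyList();
            }
        }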

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and experimenting, I think I have found something. For example, I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing it is tricky; it depends on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copies (for moves of 9 to 32 bytes): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it go wrong... once... but I could not reproduce it... This morning I read something about the 8087CW mode, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F: the difference between $133F and $027F is that $027F sets up the FPU for less precise calculations (limiting to Double instead of Extended) and for different infinity handling (which was used for older FPUs, but is not used any more). Okay, so now I have found why, but not when! I changed the workings of my AsmProfiler with a simple check (so all functions are checked on enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and DLLs, and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in the PNG support (included in D2007)? Or is the System.Move function not failsafe?
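
    If StretchBlt (or a display driver it calls into) really is clobbering the control word, one common workaround is to save and restore it around the suspect call; a hedged Delphi sketch, assuming the System unit's Get8087CW/Set8087CW (the wrapper name is illustrative):

        procedure SafeStretchDraw(Canvas: TCanvas; const R: TRect; G: TGraphic);
        var
          SavedCW: Word;
        begin
          SavedCW := Get8087CW;        // remember the control word ($133F expected)
          try
            Canvas.StretchDraw(R, G);  // the PNG path that ends in StretchBlt
          finally
            Set8087CW(SavedCW);        // undo whatever the driver changed
          end;
        end;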

    Read the article

  • OpenGL, problems with GL_MODELVIEW GL_PROJECTION...

    - by Marcos Roriz
    Guys, I'm trying to finish up my homework but I'm having some problems with these models in OpenGL... any idea why my draw isn't happening? One strange thing is that if I change to gluPerspective, it works.. #include <GL/glut.h> #include <stdlib.h> #include <stdio.h> static int shoulder = 0; static int elbow = 0; void init(void) { glClearColor(1.0, 1.0, 1.0, 0.0); } void display(void) { glClear(GL_COLOR_BUFFER_BIT); glPushMatrix(); /* BASE */ glRotatef((GLfloat) shoulder, 0.0, 0.0, 1.0); glTranslatef(1.0, 0.0, 0.0); glPushMatrix(); //glScalef(2.0, 0.4, 1.0); glBegin(GL_QUADS); glColor3f(0, 0, 0); glVertex2f(0.0, 0.0); glVertex2f(0.0, 10.0); glVertex2f(10.0, 10.0); glVertex2f(10.0, 0.0); glEnd(); glPopMatrix(); glPopMatrix(); glutSwapBuffers(); } void reshape(int w, int h) { glViewport(0, 0, (GLsizei) w, (GLsizei) h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho((GLfloat)-w/2, (GLfloat)w/2, (GLfloat)-h/2, (GLfloat)h/2, -1.0, 1.0); // orthogonal projection mode glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(0.0, 0.0, -5.0); } void keyboard(unsigned char key, int x, int y) { switch (key) { case 's': shoulder = (shoulder + 5) % 360; glutPostRedisplay(); break; case 'S': shoulder = (shoulder - 5) % 360; glutPostRedisplay(); break; case 'e': elbow = (elbow + 5) % 360; glutPostRedisplay(); break; case 'E': elbow = (elbow - 5) % 360; glutPostRedisplay(); break; case 27: exit(0); break; default: break; } } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); glutInitWindowSize(800, 400); glutInitWindowPosition(100, 100); glutCreateWindow(argv[0]); init(); glutDisplayFunc(display); glutReshapeFunc(reshape); glutKeyboardFunc(keyboard); glutMainLoop(); return 0; }
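
    A hedged guess at the cause: reshape sets glOrtho(..., -1.0, 1.0) and then applies glTranslatef(0.0, 0.0, -5.0) to the modelview, which pushes the quad to z = -5, outside the [-1, 1] depth range, so it is clipped away; gluPerspective "works" only because its near/far planes are different. One fix is to drop the translate, another is to widen the depth range; a sketch of the latter:

        void reshape(int w, int h)
        {
            glViewport(0, 0, (GLsizei) w, (GLsizei) h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* depth range now contains z = -5, so the translated quad survives clipping */
            glOrtho(-w / 2.0, w / 2.0, -h / 2.0, h / 2.0, -10.0, 10.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(0.0, 0.0, -5.0);
        }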

    Read the article

  • How to move a kinect skeleton to another position

    - by Ewerton
    I am working on an extension method to move a skeleton to a desired position in the Kinect field of view. My code receives the skeleton to be moved and the destination position; I calculate the distance between the received skeleton's hip center and the destination position to find how much to move, then iterate over the joints applying this offset. My code currently looks like this. public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny) { Joint newJoint = new Joint(); ///Based on the HipCenter (I don't know if it is reliable; it seems to be.) float howMuchMoveToX = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.X - destiny.X); float howMuchMoveToY = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Y - destiny.Y); float howMuchMoveToZ = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Z - destiny.Z); float howMuchToMultiply = 1; // Iterate over the 20 joints foreach (JointType item in Enum.GetValues(typeof(JointType))) { newJoint = skToBeMoved.Joints[item]; // This adjustment tries to keep skToBeMoved in the desired position if (newJoint.Position.X < 0) howMuchToMultiply = 1; // if the point is in a negative position, carry it to a "more positive" position else howMuchToMultiply = -1; // if the point is in a positive position, carry it to a "more negative" position // applying the new values to the joint SkeletonPoint pos = new SkeletonPoint() { X = newJoint.Position.X + (howMuchMoveToX * howMuchToMultiply), Y = newJoint.Position.Y, // * (float)whatToMultiplyY, Z = newJoint.Position.Z, // * (float)whatToMultiplyZ }; newJoint.Position = pos; skToBeMoved.Joints[item] = newJoint; //if (skToBeMoved.Joints[JointType.HipCenter].Position.X < 0) //{ // if (item == JointType.HandLeft) // { // if (skToBeMoved.Joints[item].Position.X > 0) // { // } // } //} } return skToBeMoved; } Currently, only the X position is considered. Now, THE PROBLEM: if I stand in a negative position and move my hand across to a positive position, I get a strange behaviour, shown in this image. To reproduce the behaviour you could use this code: using (SkeletonFrame frame = e.OpenSkeletonFrame()) { if (frame == null) return new Skeleton(); if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength) { skeletons = new Skeleton[frame.SkeletonArrayLength]; } frame.CopySkeletonDataTo(skeletons); Skeleton skeletonToTest = skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault(); Vector4 newPosition = new Vector4(); newPosition.X = -0.03412333f; newPosition.Y = 0.0407479f; newPosition.Z = 1.927342f; newPosition.W = 0; // ignored skeletonToTest.MoveTo(newPosition); } I know this is simple math, but I can't figure out why this happens. Any help will be appreciated.
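
    A hedged observation: because the offset goes through Math.Abs and the sign is then re-derived from each joint's own position, joints on opposite sides of zero move in opposite directions, which matches the mirrored hand in the picture. Computing one signed delta from the hip to the target and adding that same delta to every joint avoids this; a sketch:

        // one signed translation for the whole skeleton (no Abs, no per-joint sign)
        SkeletonPoint hip = skToBeMoved.Joints[JointType.HipCenter].Position;
        float dx = destiny.X - hip.X;
        float dy = destiny.Y - hip.Y;
        float dz = destiny.Z - hip.Z;
        foreach (JointType item in Enum.GetValues(typeof(JointType)))
        {
            Joint joint = skToBeMoved.Joints[item];
            SkeletonPoint pos = new SkeletonPoint()
            {
                X = joint.Position.X + dx,   // same offset for every joint
                Y = joint.Position.Y + dy,
                Z = joint.Position.Z + dz,
            };
            joint.Position = pos;
            skToBeMoved.Joints[item] = joint;
        }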

    Read the article

  • MVC Entity Framework Model not returning correct data

    - by quagland
    Hi, Run into a strange problem while writing an ASP.NET MVC site. I have a view in my SQL Server database that returns a few date ranges. The view works fine when running the query in SSMS. When the view data is returned by the Entity Framework Model, It returns the correct number of rows but some of the rows are duplicated. Here is an example of what I have done: SQL Server code: CREATE TABLE [dbo].[A]( [ID] [int] NOT NULL, [PhID] [int] NULL, [FromDate] [datetime] NULL, [ToDate] [datetime] NULL, CONSTRAINT [PK_A] PRIMARY KEY CLUSTERED ([ID] ASC)) ON [PRIMARY] go CREATE TABLE [dbo].[B]( [PhID] [int] NOT NULL, [FromDate] [datetime] NULL, [ToDate] [datetime] NULL, CONSTRAINT [PK_B] PRIMARY KEY CLUSTERED ( [PhID] ASC )) ON [PRIMARY] go CREATE VIEW C as SELECT A.ID, CASE WHEN A.PhID IS NULL THEN A.FromDate ELSE B.FromDate END AS FromDate, CASE WHEN A.PhID IS NULL THEN A.ToDate ELSE B.ToDate END AS ToDate FROM A LEFT OUTER JOIN B ON A.PhID = B.PhID go INSERT INTO B (PhID, FromDate, ToDate) VALUES (100, '20100615', '20100715') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, NULL, '20100101', '20100201') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, 100, '20100615', '20100715') INSERT INTO B (PhID, FromDate, ToDate) VALUES (101, '20101201', '20101231') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, NULL, '20100801', '20100901') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, 101, '20101201', '20101231') So now, if you select all from C, you get 4 separate date ranges In the Entity Framework Model (which I call 'Core'), the view 'C' is added. in MVC Controller: public class HomeController : Controller { public ActionResult Index() { CoreEntities db = new CoreEntities(); var clist = from c in db.C select c; return View(clist.ToList()); } } in MVC View: @model List<RM.Models.C> @{ foreach (RM.Models.C c in Model) { @String.Format("{0:dd-MMM-yyyy}", c.FromDate) <span>-</span> @String.Format("{0:dd-MMM-yyyy}", c.ToDate) <br /> } } When I run all this, it outputs this: 01-Jan-2010 - 01-Feb-2010 01-Jan-2010 - 01-Feb-2010 01-Aug-2010 - 01-Sep-2010 01-Aug-2010 - 01-Sep-2010 When it should do this (this is what the view returns): 01-Jan-2010 - 01-Feb-2010 15-Jun-2010 - 15-Jul-2010 01-Aug-2010 - 01-Sep-2010 01-Dec-2010 - 31-Dec-2010 Also, I've run the SQL profiler over it and according to that, the query being executed is: SELECT [Extent1].[ID] AS [ID], [Extent1].[FromDate] AS [FromDate], [Extent1].[ToDate] AS [ToDate] FROM (SELECT [C].[ID] AS [ID], [C].[FromDate] AS [FromDate], [C].[ToDate] AS [ToDate] FROM [dbo].[C] AS [C]) AS [Extent1] Which returns the correct data So it seems that the entity framework is doing something to the data in the meantime. To me, everything looks fine! Have I missed something? Cheers, Ben
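
    One hedged explanation: the view has no real key, so the designer has probably inferred ID as the entity key, and identity resolution then collapses the two rows sharing ID = 1 (and the two sharing ID = 2) into one tracked entity each, which is why the SQL is right but the materialized rows repeat. Reading the view without identity tracking is a quick way to test this; a sketch against the EF4 ObjectContext API:

        using (CoreEntities db = new CoreEntities())
        {
            // bypass identity resolution so rows with duplicate key
            // values are not collapsed into one cached entity
            db.C.MergeOption = System.Data.Objects.MergeOption.NoTracking;
            var clist = db.C.ToList();
        }

    The durable fix is to give the view's entity a genuinely unique key in the model (for example ID plus both dates).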

    Read the article

  • Javascript regex returning true.. then false.. then true.. etc

    - by betamax
    I have a strange problem with the validation I am writing on a form. It is a 'Check Username' button next to an input. The input default value is the username for example 'betamax'. When I press 'Check Username' it passes the regex and sends the username to the server. The server behaves as expected and returns '2' to tell the javascript that they are submitting their own username. Then, when I click the button again, the regex fails. Nothing is sent to the server obviously because the regex has failed. If I press the button again, the regex passes and then the username is sent to the server. I literally cannot figure out what would be making it do this! It makes no sense to me! This is my code: $j("#username-search").click(checkUserName); function checkUserName() { var userName = $j("#username").val(); var invalidUserMsg = 'Invalid username (a-zA-Z0-9 _ - and not - or _ at beginning or end of string)'; var filter = /^[^-_]([a-z0-9-_]{4,20})[^-_]$/gi; if (filter.test(userName)) { console.log("Pass") $j.post( "/account/profile/username_check/", { q: userName }, function(data){ if(data == 0) { $j("#username-search-results").html("Error searching for username. Try again?"); } else if(data == 5) { $j("#username-search-results").html(invalidUserMsg); } else if(data == 4) { $j("#username-search-results").html("Username too short or too long."); } else if(data == 2) { $j("#username-search-results").html("This is already your username."); } else if(data == 3) { $j("#username-search-results").html("This username is taken."); } else if(data == 1){ $j("#username-search-results").html("This username is available!"); } }); } else { console.log("fail") $j("#username-search-results").html(invalidUserMsg); } return false; } The HTML: <input name="username" id="username" value="{{ user.username }}" /> <input type="button" value="Is it taken?" id="username-search"> <span id="username-search-results"></span>
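
    The alternating pass/fail is characteristic of the g flag: a global regex keeps its lastIndex between calls to test(), so the second call starts matching mid-string, fails, resets lastIndex, and the third call passes again. Dropping the flag (or resetting lastIndex before each call) makes the check deterministic:

        // no "g" flag: test() now always starts matching from index 0
        var filter = /^[^-_]([a-z0-9-_]{4,20})[^-_]$/i;

        // alternatively, keep the flag but reset the cursor each call:
        // filter.lastIndex = 0;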

    Read the article

  • CMake: Mac OS X: ld: unknown option: -soname

    - by Alex Ivasyuv
    I try to build my app with CMake on Mac OS X, I get the following error: Linking CXX shared library libsml.so ld: unknown option: -soname collect2: ld returned 1 exit status make[2]: *** [libsml.so] Error 1 make[1]: *** [CMakeFiles/sml.dir/all] Error 2 make: *** [all] Error 2 This is strange, as Mac has .dylib extension instead of .so. There's my CMakeLists.txt: cmake_minimum_required(VERSION 2.6) PROJECT (SilentMedia) SET(SourcePath src/libsml) IF (DEFINED OSS) SET(OSS_src ${SourcePath}/Media/Audio/SoundSystem/OSS/DSP/DSP.cpp ${SourcePath}/Media/Audio/SoundSystem/OSS/Mixer/Mixer.cpp ) ENDIF(DEFINED OSS) IF (DEFINED ALSA) SET(ALSA_src ${SourcePath}/Media/Audio/SoundSystem/ALSA/DSP/DSP.cpp ${SourcePath}/Media/Audio/SoundSystem/ALSA/Mixer/Mixer.cpp ) ENDIF(DEFINED ALSA) SET(SilentMedia_src ${SourcePath}/Utils/Base64/Base64.cpp ${SourcePath}/Utils/String/String.cpp ${SourcePath}/Utils/Random/Random.cpp ${SourcePath}/Media/Container/FileLoader.cpp ${SourcePath}/Media/Container/OGG/OGG.cpp ${SourcePath}/Media/PlayList/XSPF/XSPF.cpp ${SourcePath}/Media/PlayList/XSPF/libXSPF.cpp ${SourcePath}/Media/PlayList/PlayList.cpp ${OSS_src} ${ALSA_src} ${SourcePath}/Media/Audio/Audio.cpp ${SourcePath}/Media/Audio/AudioInfo.cpp ${SourcePath}/Media/Audio/AudioProxy.cpp ${SourcePath}/Media/Audio/SoundSystem/SoundSystem.cpp ${SourcePath}/Media/Audio/SoundSystem/libao/AO.cpp ${SourcePath}/Media/Audio/Codec/WAV/WAV.cpp ${SourcePath}/Media/Audio/Codec/Vorbis/Vorbis.cpp ${SourcePath}/Media/Audio/Codec/WavPack/WavPack.cpp ${SourcePath}/Media/Audio/Codec/FLAC/FLAC.cpp ) SET(SilentMedia_LINKED_LIBRARY sml vorbisfile FLAC++ wavpack ao #asound boost_thread-mt boost_filesystem-mt xspf gtest ) INCLUDE_DIRECTORIES( /usr/include /usr/local/include /usr/include/c++/4.4 /Users/alex/Downloads/boost_1_45_0 ${SilentMedia_SOURCE_DIR}/src ${SilentMedia_SOURCE_DIR}/${SourcePath} ) #link_directories( # /usr/lib # /usr/local/lib # /Users/alex/Downloads/boost_1_45_0/stage/lib #) IF(LibraryType STREQUAL "static") ADD_LIBRARY(sml-static STATIC ${SilentMedia_src}) # rename library from libsml-static.a => libsml.a SET_TARGET_PROPERTIES(sml-static PROPERTIES OUTPUT_NAME "sml") SET_TARGET_PROPERTIES(sml-static PROPERTIES CLEAN_DIRECT_OUTPUT 1) ELSEIF(LibraryType STREQUAL "shared") ADD_LIBRARY(sml SHARED ${SilentMedia_src}) # change compile optimization/debug flags # -Werror -pedantic IF(BuildType STREQUAL "Debug") SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -ggdb") ELSEIF(BuildType STREQUAL "Release") SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -O3 -fomit-frame-pointer") ENDIF() SET_TARGET_PROPERTIES(sml PROPERTIES CLEAN_DIRECT_OUTPUT 1) ENDIF() ### TEST ### IF(Test STREQUAL "true") ADD_EXECUTABLE (bin/TestXSPF ${SourcePath}/Test/Media/PlayLists/XSPF/TestXSPF.cpp) TARGET_LINK_LIBRARIES (bin/TestXSPF ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/test1 ${SourcePath}/Test/test.cpp) TARGET_LINK_LIBRARIES (bin/test1 ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/TestFileLoader ${SourcePath}/Test/Media/Container/FileLoader/TestFileLoader.cpp) TARGET_LINK_LIBRARIES (bin/TestFileLoader ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/testMixer ${SourcePath}/Test/testMixer.cpp) TARGET_LINK_LIBRARIES (bin/testMixer ${SilentMedia_LINKED_LIBRARY}) ENDIF (Test STREQUAL "true") ### TEST ### ADD_CUSTOM_TARGET(doc COMMAND doxygen ${SilentMedia_SOURCE_DIR}/doc/Doxyfile) There was no error on Linux. Build process: cmake -D BuildType=Debug -D LibraryType=shared . 
    make After running these, I found that an incorrect link command is generated in CMakeFiles/sml.dir/link.txt. But why, when the whole point of CMake is to be cross-platform? How do I fix it?
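
    -soname is a GNU ld option; Apple's linker uses -install_name instead, and CMake normally knows this, so a Linux-flavoured link line on a Mac usually points to stale platform detection cached in the build tree (for example a CMakeCache.txt generated on Linux and carried over with the sources). A hedged first step is to configure from a clean tree:

        rm -rf CMakeCache.txt CMakeFiles/
        cmake -D BuildType=Debug -D LibraryType=shared .
        make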

    Read the article

  • How to update NSMutableDictionary. My code doesn't work.

    - by dawatson833
    I've populated an array using: arrSettings = [[NSMutableArray alloc] initWithContentsOfFile:[self settingsPath]]; The file is a plist with an array as the root and then a dictionary with three keys defined as number. (I've also tried setting the keys to string.) I display the values from the plist file on a view using: diaper = [[arrSettings objectAtIndex:0] objectForKey:@"Diaper Expenses"]; oil = [[arrSettings objectAtIndex:0] objectForKey:@"Oil Used"]; tree = [[arrSettings objectAtIndex:0] objectForKey:@"Wood Used"]; This code works fine: the values in the dictionary are assigned to the variables and they are displayed. The user can make changes and then press a save button. I use this code to extract the dictionary part of the array so I can update it. The assignment to editDictionary works, and I've double-checked that the key names, including case, are correct. editDictionary = [[NSMutableDictionary alloc] init]; editDictionary = [arrSettings objectAtIndex:0]; NSNumber *myNumber = [NSNumber numberWithFloat:diaperAmount]; [editDictionary setValue:myNumber forKey:@"Diaper Expenses"]; myNumber = [NSNumber numberWithFloat:oilAmount]; [editDictionary setValue:myNumber forKey:@"Oil Used"]; myNumber = [NSNumber numberWithFloat:treeAmount]; [editDictionary setValue:myNumber forKey:@"Wood Used"]; In this example I've used an NSNumber, but I've also tried using the xxxAmount field directly in setValue instead of creating an NSNumber. Neither implementation works. Several strange things happen: sometimes the first two setValue statements work but the last one fails with an EXC_BAD_ACCESS; other times the first setValue fails with the same error. I have no idea why the first two sometimes work. I'm at a loss about what to do next; I've tried several implementations and none of them work. Also, in the debugger, how can I display the editDictionary elements? I can see editDictionary, but I don't know how to display the individual elements.
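
    One hedged explanation: containers loaded with initWithContentsOfFile: have immutable children, so [arrSettings objectAtIndex:0] is a plain NSDictionary no matter how the outer array is declared, and mutating it is undefined behaviour (note also that the [[NSMutableDictionary alloc] init] on the previous line is leaked, since the next assignment overwrites the pointer). Editing a mutable copy and writing it back is one fix; a sketch:

        // plist children come back immutable: edit a mutable copy instead
        NSMutableDictionary *editDictionary =
            [[arrSettings objectAtIndex:0] mutableCopy];
        [editDictionary setObject:[NSNumber numberWithFloat:diaperAmount]
                           forKey:@"Diaper Expenses"];
        [editDictionary setObject:[NSNumber numberWithFloat:oilAmount]
                           forKey:@"Oil Used"];
        [editDictionary setObject:[NSNumber numberWithFloat:treeAmount]
                           forKey:@"Wood Used"];
        [arrSettings replaceObjectAtIndex:0 withObject:editDictionary];
        [editDictionary release];   // mutableCopy returns a retained object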

    Read the article

  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows Service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP sp3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on 2008 Server. It was targeting 2.0 framework; I tried targeting 3.5 instead, but it still fails. The strange part is really in how it fails; no exception is thrown. The service itself merely times out. I added tracing code, and it fails when opening the connection for Database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway while writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. I put in code traces, and it gets to the trace the line before the second lookup, but nothing after. I've had the same experience hitting two different SQL servers (both are SQL 2005 running on Server 2003). The connection string is utilizing a SQL account (not Windows integration). All connections are against the same database in this case, but given the nature of the code it is being escalated to MSDTC. Here's the basic code structure: TransactionOptions options = new TransactionOptions(); options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted; using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options)) { // Database lookup #1 TransactionOptions options = new TransactionOptions(); options.IsolationLevel = Transaction.Current != null ? Transaction.Current.IsolationLevel : System.Transactions.IsolationLevel.ReadCommitted; using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options)) { // Database lookup #2; fails on connection.Open() // Database save (never reached) scope.Complete();<br/> } scope.Complete();<br/> } My local firewall is disabled. The service normally runs using Network Service, but I also tried my user account (same results). The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled out the code and ran it fine within a local Windows Form application. If anyone has any additional debugging ideas (or, even better, solutions) I'd love to hear them.
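
    A hedged angle while debugging: on Server 2008 and Windows 7 the Distributed Transaction Coordinator's "Network DTC Access" is disabled by default, and a blocked escalation stalls at exactly the point described, the second connection open, without throwing until the timeout. Keeping the whole scope on one open connection avoids escalation to MSDTC on SQL 2005 entirely; a sketch (the helper names are illustrative):

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        using (TransactionScope scope = new TransactionScope(
                   TransactionScopeOption.RequiresNew, options))
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();       // the one and only connection in the scope
            Lookup1(conn);     // illustrative helpers: every command reuses
            Lookup2(conn);     // the same open connection
            Save(conn);
            scope.Complete();
        }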

    Read the article

  • printf anomaly after "fork()"

    - by pechenie
    OS: Linux, Language: pure C. I'm moving forward in learning C programming in general, and C programming under UNIX as a special case :D So, I noticed some strange (to me) behaviour of the printf() function after using a fork() call. Let's take a look at a simple test program: #include <stdio.h> #include <unistd.h> #include <sys/wait.h> int main() { int pid; printf( "Hello, my pid is %d", getpid() ); pid = fork(); if( pid == 0 ) { printf( "\nI was forked! :D" ); sleep( 3 ); } else { waitpid( pid, NULL, 0 ); printf( "\n%d was forked!", pid ); } return 0; } In this case the output looks like: Hello, my pid is 1111 I was forked! :DHello, my pid is 1111 2222 was forked! Why did the second "Hello" string occur in the child's output? Yes, it is exactly what the parent printed at its start, with the parent's pid. But! If we place a '\n' character at the end of each string we get the expected output: #include <stdio.h> #include <unistd.h> #include <sys/wait.h> int main() { int pid; printf( "Hello, my pid is %d\n", getpid() ); // SIC!! pid = fork(); if( pid == 0 ) { printf( "I was forked! :D" ); // removed the '\n'; doesn't matter sleep( 3 ); } else { waitpid( pid, NULL, 0 ); printf( "\n%d was forked!", pid ); } return 0; } And the output looks like: Hello, my pid is 1111 I was forked! :D 2222 was forked! Why does this happen? Is it... umm... correct behaviour? Or is it a kind of bug?
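
    This is expected stdio behaviour rather than a bug: with a terminal, stdout is line-buffered, so the trailing \n flushes the buffer before fork(); without it, the unflushed "Hello" text is still sitting in the stdio buffer, the buffer is duplicated into the child along with the rest of the address space, and both processes flush their copy at exit. Flushing explicitly before forking gives the same result as the \n version; a sketch:

        printf( "Hello, my pid is %d", getpid() );
        fflush( stdout );   /* drain the stdio buffer before it is duplicated */
        pid = fork();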

    Read the article

  • Regular expression to convert ul to textindent and back, with a different attribute value for first

    - by chapmanio
    Hi, This is a related to a previous question I have asked here, see the link below for a brief description as to why I am trying to do this. Regular expression from font to span (size and colour) and back (VB.NET) Basically I need a regex replace function (or if this can be done in pure VB then that's fine) to convert all ul tags in a string to textindent tags, with a different attribute value for the first textindent tag. For example: <ul> <li>This is some text</li> <li>This is some more text</li> <li> <ul> <li>This is some indented text</li> <li>This is some more text</li> </ul> </li> <li>More text!</li> <li> <ul> <li>This is some indented text</li> <li>This is some more text</li> </ul> </li> <li>More text!</li> </ul> Will become: <textformat indent="0"> <li>This is some text</li> <li>This is some more text</li> <li> <textformat indent="20"> <li>This is some indented text</li> <li>This is some more text</li> </textformat> </li> <li>More text!</li> <li> <textformat indent="20"> <li>This is some indented text</li> <li>This is some more text</li> </textformat> </li> <li>More text!</li> </textformat> Basically I want the first ul tag to have no indenting, but all nested ul tags to have an indent of 20. I appreciate this is a strange request but hopefully that makes sense, please let me know if you have any questions. Thanks in advance.

    Read the article

  • php imap save message to sent folder after sending

    - by user1108279
    I have been stuck on this for two days. I am trying to use imap_append from PHP, but no luck so far. I was able to implement this code for a single attachment, but multiple attachments are not working. <?php $authhost="{000.000.000.000:993/validate-cert/ssl}Sent"; $user="sadasd"; $pass="sadasd"; if ($mbox=imap_open( $authhost, $user, $pass)) { $dmy=date("d-M-Y H:i:s"); $filename="filename.pdf"; $attachment = chunk_split(base64_encode($filestring)); $boundary = "------=".md5(uniqid(rand())); $msg = ("From: Somebody\r\n" . "To: [email protected]\r\n" . "Date: $dmy\r\n" . "Subject: This is the subject\r\n" . "MIME-Version: 1.0\r\n" . "Content-Type: multipart/mixed; boundary=\"$boundary\"\r\n" . "\r\n\r\n" . "--$boundary\r\n" . "Content-Type: text/html;\r\n\tcharset=\"ISO-8859-1\"\r\n" . "Content-Transfer-Encoding: 8bit \r\n" . "\r\n\r\n" . "Hello this is a test\r\n" . "\r\n\r\n" . "--$boundary\r\n" . "Content-Transfer-Encoding: base64\r\n" . "Content-Disposition: attachment; filename=\"$filename\"\r\n" . "\r\n" . $attachment . "\r\n" . "\r\n\r\n\r\n" . "--$boundary--\r\n\r\n"); imap_append($mbox,$authhost,$msg); imap_close($mbox); } else { echo "<h1>FAIL!</h1>\n"; } ?> The raw code above works, but I am unable to add multiple attachments. In some cases the message body arrives base64-encoded, full of strange letters, something like R0lGODlhkAEfAOYAAPSeZPONtdSIu+u2vv/La+5Rj8AhYPvWhOjtkfvi7MRImcjYse5DM6O+dOnl ..... I searched all over the web and still have no luck. Since I have a dedicated server I also tried to implement some procmail + postfix examples, but also no luck. Can someone help?
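
    For multiple attachments the MIME body simply repeats the attachment part, one per file, each introduced by the same boundary line, with the "--" terminator appearing only once at the very end; the strange letters (R0lGODlh... is just base64 of a GIF) suggest a part whose Content-Transfer-Encoding header the client did not pick up, often due to a misplaced boundary. A hedged sketch of the loop (variable names are illustrative):

        // $msg already holds the headers and the text part, as in the example
        foreach ($attachments as $filename => $filestring) {
            $msg .= "--$boundary\r\n"
                  . "Content-Type: application/octet-stream; name=\"$filename\"\r\n"
                  . "Content-Transfer-Encoding: base64\r\n"
                  . "Content-Disposition: attachment; filename=\"$filename\"\r\n"
                  . "\r\n"
                  . chunk_split(base64_encode($filestring)) . "\r\n";
        }
        $msg .= "--$boundary--\r\n";   // closing boundary appears exactly once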

    Read the article

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN, ReadFile can read data in two different ways: synchronously and asynchronously. I need the second one. The following code demonstrates usage with the OVERLAPPED struct: #include <windows.h> #include <stdio.h> #include <time.h> void Read() { HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if ( hFile == INVALID_HANDLE_VALUE ) { printf("Failed to open the file\n"); return; } int dataSize = 256 * 1024 * 1024; char* data = (char*)malloc(dataSize); memset(data, 0xFF, dataSize); OVERLAPPED overlapped; memset(&overlapped, 0, sizeof(overlapped)); printf("reading: %d\n", time(NULL)); BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped); printf("sent: %d\n", time(NULL)); DWORD bytesRead; result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); // wait until completion - returns immediately printf("done: %d\n", time(NULL)); CloseHandle(hFile); } int main() { Read(); } On Windows XP the output is: reading: 1296651896 sent: 1296651896 done: 1296651899 It means that ReadFile didn't block and returned immediately, within the same second, whereas the reading process continued for 3 seconds. That is normal async reading. But on Windows 7 and Windows 2008 I get the following results: reading: 1296661205 sent: 1296661209 done: 1296661209 That is the behaviour of a synchronous read. MSDN says that an async ReadFile can sometimes behave synchronously (when the file is compressed or encrypted, for example), but the return value in that situation should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So WinAPI tells me that it is an async call, but when I look at the test I see that it is not! I'm not the only one who has found this "bug": read the comments on the ReadFile MSDN page. So what's the solution? Does anybody know? It has been 14 months since Denis found this strange behaviour.
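
    One hedged workaround: on Vista and later the cache manager satisfies buffered reads synchronously whenever it can, so genuinely overlapped behaviour generally requires opening the file unbuffered (which in turn requires sector-aligned buffers, offsets, and read sizes); the flags below are the standard WinAPI names:

        /* unbuffered + overlapped: reads bypass the cache and stay asynchronous */
        HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL,
                                   OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING,
                                   NULL);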

    Read the article

  • An NSMutableArray is destroying my life!

    - by camilo
    EDITED to show the relevant part of the code. Hi. There's a strange problem with an NSMutableArray which I'm just not understanding... Explaining: I have an NSMutableArray, defined as a property (nonatomic, retain), synthesized, and initialized with a capacity of 29 elements: realSectionNames = [[NSMutableArray alloc] initWithCapacity:29]; After the initialization, I can insert elements as I wish and everything seems to work fine. While I'm running the application, however, if I insert a new element into the array, I can print the array in the function where I inserted the element and everything seems OK. However, when I select a row in the table, and the array needs to be read, my application crashes. In fact, it cannot even print the array any more. Is there any "magical and logical trick" everybody should know when using an NSMutableArray that a beginner like myself might be missing? Thanks a lot. I declare my array as realSectionNames = [[NSMutableArray alloc] initWithCapacity:29]; I insert objects into my array with [realSectionNames addObject:[category categoryFirstLetter]]; although I know I can also insert with [realSectionNames insertObject:[category categoryFirstLetter] atIndex:i]; where "i" is the first unoccupied position. After the insertion, I reload the data of my tableView. Printing the array before or after reloading the data shows it has the desired information. After that, selecting a row in the table makes the application crash. This realSectionNames is used in several UITableViewDelegate functions, but for this case that doesn't matter. What truly matters is that printing the array at the beginning of the didSelectRowAtIndexPath function crashes everything (and of course, doesn't print anything). I'm pretty sure it's that line, because printing anything else the line before works (example): NSLog(@"Anything"); NSLog(@"%@", realSectionNames); gives the output: 2010-03-24 15:16:04.146 myApplicationExperience[3527:207] Anything [Session started at 2010-03-24 15:16:04 +0000.] GNU gdb 6.3.50-20050815 (Apple version gdb-967) (Tue Jul 14 02:11:58 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 3527. Still not understanding what kind of stupidity I've done this time... maybe it's not too late to pursue a career in brain surgery?
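
    A hedged debugging sketch: a crash inside NSLog(@"%@") on a collection usually means one of the elements has been deallocated, not the array itself; running with the NSZombieEnabled environment variable set to YES will name the over-released object. If categoryFirstLetter returns an object whose lifetime the array shouldn't rely on, copying on insert is a cheap defence:

        // defensive copy: the array then owns an object with a known lifetime
        NSString *letter = [[category categoryFirstLetter] copy];
        [realSectionNames addObject:letter];
        [letter release];   // addObject: retained it; balance the copy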

    Read the article

  • Android database closed exception

    - by Bombastic
    I'm working on a project where I'm downloading and saving data from web to sqlite database. A few minutes ago I receive a strange exception to our server from a user which is saying that the sqlite database is already closed..and I just checked the whole file where the exception happened and I'm not calling dbHelper.close();. Here is the function where the app crashes and LogCat message : public void insertCollectionCountries(JSONObject obj, Context context) { //Insert in collection_countries if(RPCCommunicator.isServiceRunning){ Log.w("","JsonCollection - insertCollectionCountries"); ContentValues valuesCountries = new ContentValues(); try { collectionId = Integer.parseInt(obj.getString("collection_id")); dbHelper.deleteSQL("collection_countries", "collection_id=?", new String[] {Integer.toString(collectionId)}); JSONArray arrayCountries = obj.getJSONArray("country_availability"); for (int i=0; i<arrayCountries.length(); i++) { valuesCountries.put("collection_id", collectionId); String countryCode = arrayCountries.getString(i); valuesCountries.put("country_code", countryCode); dbHelper.executeQuery("collection_countries", valuesCountries); } } catch (JSONException e){ e.printStackTrace(); } } } and the error is on that line : dbHelper.executeQuery("collection_countries", valuesCountries); here is the LogCat message : java.lang.IllegalStateException: database /data/data/com.stampii.stampii/databases/stampii_sys_tpl.sqlite (conn# 0) already closed at android.database.sqlite.SQLiteDatabase.verifyDbIsOpen(SQLiteDatabase.java:2123) at android.database.sqlite.SQLiteDatabase.setTransactionSuccessful(SQLiteDatabase.java:734) at com.stampii.stampii.comm.rpc.SystemDatabaseHelper.execQuery(SystemDatabaseHelper.java:298) at com.stampii.stampii.comm.rpc.SystemDatabaseHelper.executeQuery(SystemDatabaseHelper.java:291) at com.stampii.stampii.jsonAPI.JsonCollection.insertCollectionCounries(JsonCollection.java:548) at com.stampii.stampii.jsonAPI.JsonCollection.executeInsert(JsonCollection.java:181) at com.stampii.stampii.collections.MyService.downloadCollections(MyService.java:122) at com.stampii.stampii.collections.MyService$2.run(MyService.java:74) at java.lang.Thread.run(Thread.java:1020) and function in my dbHelperClass which I'm using to insert data : public boolean executeQuery(String tableName,ContentValues values){ return execQuery(tableName,values); } private boolean execQuery(String tableName,ContentValues values){ sqliteDb = instance.getWritableDatabase(); sqliteDb.beginTransaction(); sqliteDb.insert(tableName, null, values); sqliteDb.setTransactionSuccessful(); sqliteDb.endTransaction(); return true; } Any ideas which can close my sqlite database or what can cause that exception, because I've tested this code on a few emulators with different Android versions, different devices (HTC EVO 3D, Samsung Galaxy Nexus,HTC Desire, LG OPTIMUS PAD, Samsung Galaxy S2, Samsung Galaxy Note) and it's working fine. Thanks in advance!
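
    A hedged hardening of the helper: if insert() ever throws, endTransaction() is never reached and the connection is left in a broken state for the next caller, and with a service thread and the UI thread sharing one helper that is a plausible route to "already closed" errors. Wrapping the transaction in try/finally and serializing access costs little:

        private synchronized boolean execQuery(String tableName, ContentValues values) {
            SQLiteDatabase db = instance.getWritableDatabase();
            db.beginTransaction();
            try {
                db.insert(tableName, null, values);
                db.setTransactionSuccessful();
            } finally {
                db.endTransaction();   // always runs, even if insert() throws
            }
            return true;
        }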

    Read the article

  • SSL Authentication with Certificates: Should the Certificates have a hostname?

    - by sixtyfootersdude
    Summary JBoss allows clients and servers to authenticate using certificates and ssl. One thing that seems strange is that you are not required to give your hostname on the certificate. I think that this means if Server B is in your truststore, Sever B can pretend to be any server that they want. (And likewise: if Client B is in your truststore...) Am I missing something here? Authentication Steps (Summary of Wikipeida Page) Client Server ================================================================================================= 1) Client sends Client Hello ENCRIPTION: None - highest TLS protocol supported - random number - list of cipher suites - compression methods 2) Sever Hello ENCRIPTION: None - highest TLS protocol supported - random number - choosen cipher suite - choosen compression method 3) Certificate Message ENCRIPTION: None - 4) ServerHelloDone ENCRIPTION: None 5) Certificate Message ENCRIPTION: None 6) ClientKeyExchange Message ENCRIPTION: server's public key => only server can read => if sever can read this he must own the certificate - may contain a PreMasterSecerate, public key or nothing (depends on cipher) 7) CertificateVerify Message ENCRIPTION: clients private key - purpose is to prove to the server that client owns the cert 8) BOTH CLIENT AND SERVER: - use random numbers and PreMasterSecret to compute a common secerate 9) Finished message - contains a has and MAC over previous handshakes (to ensure that those unincripted messages did not get broken) 10) Finished message - samething Sever Knows The client has the public key for the sent certificate (step 7) The client's certificate is valid because either: it has been signed by a CA (verisign) it has been self-signed BUT it is in the server's truststore It is not a replay attack because presumably the random number (step 1 or 2) is sent with each message Client Knows The server has the public key for the sent certificate (step 6 with step 8) The server's certificate is valid because either: it has been signed by a CA (verisign) it has been self-signed BUT it is in the client's truststore It is not a replay attack because presumably the random number (step 1 or 2) is sent with each message Potential Problem Suppose the client's truststore has certs in it: Server A Server B (malicous) Server A has hostname www.A.com Server B has hostname www.B.com Suppose: The client tries to connect to Server A but Server B launches a man in the middle attack. Since server B: has a public key for the certificate that will be sent to the client has a "valid certificate" (a cert in the truststore) And since: certificates do not have a hostname feild in them It seems like Server B can pretend to be Server A easily. Is there something that I am missing?
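
    One correction worth noting: X.509 certificates do carry the hostname, in the Subject CN and/or the subjectAltName extension, and a client that verifies the presented name against the host it dialled defeats the substitution described above (the TLS handshake itself only proves possession of the key; hostname verification is the layer on top). A sketch of issuing a cert bound to a name (keytool, values illustrative):

        keytool -genkeypair -alias serverA -keyalg RSA \
            -dname "CN=www.A.com, O=Example" -keystore serverA.jks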

    Read the article

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; unexpected anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself. However, if I ALTER the stored procedure (even if I don't change anything in the procedure, except to perhaps add a space to the script created by SSMS Modify command), the completion time reverts to expected completion time. IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium). The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange because I was under the impression that SSMS used the same SqlClient driver as in .NET. When I point my development instance of the web app to the production database, I again get the anomalous long completion time. If my SqlCommand Timeout is too short, I get System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected? Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run with expected speed without requiring occasional intervention. Resolution: I wanted to to update this question in case others are experiencing this issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. In order to test, I utilize sp_recompile and pass it one of the susceptible sprocs. I then initiate a website request from my browser that will invoke the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete because of a SQL timeout on the sproc invocation. To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal but was a necessary trade-off given the timeframe for the project.
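
    The ALTER-fixes-it symptom is the classic signature of parameter sniffing: ALTER, like sp_recompile, evicts the cached plan, so the next call compiles a fresh plan suited to its own parameters. Pinning or recompiling the plan is the usual mitigation on SQL Server 2005, matching the resolution above; a sketch with placeholder names:

        -- 42 stands in for a "typical" parameter value
        ALTER PROCEDURE dbo.SomeProc @SomeParam int
        AS
        SELECT Col1, Col2
        FROM dbo.SomeTable
        WHERE Col1 = @SomeParam
        OPTION (OPTIMIZE FOR (@SomeParam = 42));  -- or OPTION (RECOMPILE)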

    Read the article

  • weird performance in C++ (VC 2010)

    - by raicuandi
    Hello, I have this loop written in C++, that compiled with MSVC2010 takes a long time to run. (300ms) for (int i=0; i<h; i++) { for (int j=0; j<w; j++) { if (buf[i*w+j] > 0) { const int sy = max(0, i - hr); const int ey = min(h, i + hr + 1); const int sx = max(0, j - hr); const int ex = min(w, j + hr + 1); float val = 0; for (int k=sy; k < ey; k++) { for (int m=sx; m < ex; m++) { val += original[k*w + m] * ds[k - i + hr][m - j + hr]; } } heat_map[i*w + j] = val; } } } It seemed a bit strange to me, so I did some tests then changed a few bits to inline assembly: (specifically, the code that sums "val") for (int i=0; i<h; i++) { for (int j=0; j<w; j++) { if (buf[i*w+j] > 0) { const int sy = max(0, i - hr); const int ey = min(h, i + hr + 1); const int sx = max(0, j - hr); const int ex = min(w, j + hr + 1); __asm { fldz } for (int k=sy; k < ey; k++) { for (int m=sx; m < ex; m++) { float val = original[k*w + m] * ds[k - i + hr][m - j + hr]; __asm { fld val fadd } } } float val1; __asm { fstp val1 } heat_map[i*w + j] = val1; } } } Now it runs in half the time, 150ms. It does exactly the same thing, but why is it twice as quick? In both cases it was run in Release mode with optimizations on. Am I doing anything wrong in my original C++ code?
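
    A hedged line of investigation rather than an answer: under the default /fp:precise the compiler keeps val rounded to float (typically via a memory spill) on every iteration, while the hand-written x87 code accumulates on the register stack at extended precision and never spills, which could plausibly account for a 2x gap. Comparing builds is cheap:

        rem let the accumulator stay in a register across iterations
        cl /O2 /fp:fast foo.cpp
        rem or move the float math off the x87 stack entirely
        cl /O2 /arch:SSE2 foo.cpp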

    Read the article

  • Android - exception from an AsynchTask call

    - by GeekedOut
    I have an Activity that makes a remote server call and tries to populate a list. The call to the server works fine, and the call returns some JSON which is good. But then the system throws this exception: 04-06 18:43:19.626: D/AndroidRuntime(2564): Shutting down VM 04-06 18:43:19.626: W/dalvikvm(2564): threadid=1: thread exiting with uncaught exception (group=0x409c01f8) 04-06 18:43:19.686: E/AndroidRuntime(2564): FATAL EXCEPTION: main 04-06 18:43:19.686: E/AndroidRuntime(2564): java.lang.NullPointerException 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ArrayAdapter.createViewFromResource(ArrayAdapter.java:394) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ArrayAdapter.getView(ArrayAdapter.java:362) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.AbsListView.obtainView(AbsListView.java:2033) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ListView.measureHeightOfChildren(ListView.java:1244) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.ListView.onMeasure(ListView.java:1155) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1369) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureVertical(LinearLayout.java:660) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.onMeasure(LinearLayout.java:553) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.FrameLayout.onMeasure(FrameLayout.java:293) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.measureVertical(LinearLayout.java:812) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.LinearLayout.onMeasure(LinearLayout.java:553) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:4698) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.widget.FrameLayout.onMeasure(FrameLayout.java:293) 04-06 18:43:19.686: E/AndroidRuntime(2564): at com.android.internal.policy.impl.PhoneWindow$DecorView.onMeasure(PhoneWindow.java:2092) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.View.measure(View.java:12723) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1064) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.view.ViewRootImpl.handleMessage(ViewRootImpl.java:2442) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.os.Handler.dispatchMessage(Handler.java:99) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.os.Looper.loop(Looper.java:137) 04-06 18:43:19.686: E/AndroidRuntime(2564): at android.app.ActivityThread.main(ActivityThread.java:4424) 04-06 18:43:19.686: E/AndroidRuntime(2564): at java.lang.reflect.Method.invokeNative(Native Method) 04-06 18:43:19.686: E/AndroidRuntime(2564): at java.lang.reflect.Method.invoke(Method.java:511) 04-06 18:43:19.686: E/AndroidRuntime(2564): at 
com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784) 04-06 18:43:19.686: E/AndroidRuntime(2564): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551) 04-06 18:43:19.686: E/AndroidRuntime(2564): at dalvik.system.NativeStart.main(Native Method) Why would this happen? It doesn't point to any of my code, so it's a bit strange. The protected void onPostExecute(String result) callback never gets called. Thanks!
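
    A hedged reading of the trace: the NPE is raised inside ArrayAdapter.createViewFromResource during layout, which typically means the adapter was constructed with a row layout in which it cannot find a TextView (its internal findViewById returns null), or the data list handed over from onPostExecute contains nulls. The safe baseline is the built-in row layout (names here are illustrative):

        // android.R.layout.simple_list_item_1 is itself a TextView, so the
        // adapter's internal findViewById cannot come back null
        ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                this, android.R.layout.simple_list_item_1, items);
        listView.setAdapter(adapter);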

    Read the article

  • Getting "Cannot identify image file" when trying to create a thumbnail in Django

    - by Mo J. Mughrabi
    Am trying to create a thumbnail in django, am trying to build a custom class specifically to be used for generating thumbnails. As following from StringIO import StringIO from PIL import Image class Thumbnail(object): source = '' size = (50, 50) output = '' def __init__(self): pass @staticmethod def load(src): self = Thumbnail() self.source = src return self def generate(self, size=(50, 50)): if not isinstance(size, tuple): raise Exception('Thumbnail class: The size parameter must be an instance of a tuple.') self.size = size # resize properties box = self.size factor = 1 fit = True image = Image.open(self.source) # Convert to RGB if necessary if image.mode not in ('L', 'RGB'): image = image.convert('RGB') while image.size[0]/factor > 2*box[0] and image.size[1]*2/factor > 2*box[1]: factor *=2 if factor > 1: image.thumbnail((image.size[0]/factor, image.size[1]/factor), Image.NEAREST) #calculate the cropping box and get the cropped part if fit: x1 = y1 = 0 x2, y2 = image.size wRatio = 1.0 * x2/box[0] hRatio = 1.0 * y2/box[1] if hRatio > wRatio: y1 = int(y2/2-box[1]*wRatio/2) y2 = int(y2/2+box[1]*wRatio/2) else: x1 = int(x2/2-box[0]*hRatio/2) x2 = int(x2/2+box[0]*hRatio/2) image = image.crop((x1,y1,x2,y2)) #Resize the image with best quality algorithm ANTI-ALIAS image.thumbnail(box, Image.ANTIALIAS) # save image to memory temp_handle = StringIO() image.save(temp_handle, 'png') temp_handle.seek(0) self.output = temp_handle return self def get_output(self): return self.output.read() the purpose of the class is so i can use it inside different locations to generate thumbnails on the fly. The class works perfectly, I've tested it directly under a view.. I've implemented the thumbnail class inside the save method of the forms to resize the original images on saving. in my design, I have two fields for thumbnails. I was able to generate one thumbnail, if I try to generate two it crashes and I've been stuck for hours not sure whats the problem. Here is my model class Image(models.Model): article = models.ForeignKey(Article) title = models.CharField(max_length=100, null=True, blank=True) src = models.ImageField(upload_to='publication/image/') r128 = models.ImageField(upload_to='publication/image/128/', blank=True, null=True) r200 = models.ImageField(upload_to='publication/image/200/', blank=True, null=True) uploaded_at = models.DateTimeField(auto_now=True) Here is my forms class ImageForm(models.ModelForm): """ """ class Meta: model = Image fields = ('src',) def save(self, commit=True): instance = super(ImageForm, self).save(commit=True) file = Thumbnail.load(instance.src) instance.r128 = SimpleUploadedFile( instance.src.name, file.generate((128, 128)).get_output(), content_type='image/png' ) instance.r200 = SimpleUploadedFile( instance.src.name, file.generate((200, 200)).get_output(), content_type='image/png' ) if commit: instance.save() return instance the strange part is, when i remove the line which contains instance.r200 in the form save. It works fine, and it does the thumbnail and stores it successfully. Once I add the second thumbnail it fails.. Any ideas what am doing wrong here? 
Thanks. Update: I tried earlier doing the following, but I still got the same error: class ImageForm(models.ModelForm): """ """ class Meta: model = Image fields = ('src',) def save(self, commit=True): instance = super(ImageForm, self).save(commit=True) instance.r128 = SimpleUploadedFile( instance.src.name, Thumbnail.load(instance.src).generate((128, 128)).get_output(), content_type='image/png' ) instance.r200 = SimpleUploadedFile( instance.src.name, Thumbnail.load(instance.src).generate((200, 200)).get_output(), content_type='image/png' ) if commit: instance.save() return instance
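
    A hedged guess at "cannot identify image file": instance.src has already been read once by the first generate() call, so the second Image.open starts at end-of-file and PIL sees zero bytes. Rewinding the file before each open is a one-line fix; a sketch inside generate():

        # rewind the uploaded file: a previous read leaves the file
        # pointer at EOF and PIL then sees an "empty" image
        self.source.seek(0)
        image = Image.open(self.source)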

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info: (server) root@devel:/app/axir/target# uname -a Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux (local) wiktor@beastie:~$ uname -a Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz") PPID PID CMD RSS SZ VSZ (local) 1588 1615 java -cp axir-distribution. 25484 234382 937528 1615 1631 java -cp /home/wiktor/Code/ 83472 163059 652236 1615 1657 java -cp /home/wiktor/Code/ 70624 89135 356540 1615 1658 java -cp /home/wiktor/Code/ 37652 77625 310500 1615 1669 java -cp /home/wiktor/Code/ 38096 77733 310932 1615 1675 java -cp /home/wiktor/Code/ 37420 61395 245580 1615 1684 java -cp /home/wiktor/Code/ 38000 77736 310944 1615 1703 java -cp /home/wiktor/Code/ 39180 78060 312240 1615 1712 java -cp /home/wiktor/Code/ 38488 93882 375528 1615 1719 java -cp /home/wiktor/Code/ 38312 77874 311496 1615 1726 java -cp /home/wiktor/Code/ 38656 77958 311832 1615 1727 java -cp /home/wiktor/Code/ 78016 89429 357716 (server) 22522 23560 java -cp axir-distribution. 24860 285196 1140784 23560 23585 java -cp /app/axir/target/a 100764 161629 646516 23560 23667 java -cp /app/axir/target/a 72408 92682 370728 23560 23670 java -cp /app/axir/target/a 39948 97671 390684 23560 23674 java -cp /app/axir/target/a 40140 81586 326344 23560 23739 java -cp /app/axir/target/a 39688 81542 326168 They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB) does it more closely reflect 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively. (if that's even possible) Thank you for your help, Wiktor
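
    Worth noting when comparing the two machines: the server's 2.6.18-028stab kernel is OpenVZ, whose beancounters charge the full committed/virtual size against the container, whereas htop on stock Ubuntu reports resident pages, so the two boxes can use similar physical memory while reporting very differently. Trimming per-thread stacks and the heap reservation shrinks the virtual footprint (note the flag is -XX:MaxPermSize; there is no MaxPermGen); values below are purely illustrative:

        java -Xms32m -Xmx128m -Xss256k -XX:MaxPermSize=64m -jar worker.jar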

    Read the article

  • Backbone.js (model instanceof Model) via Chrome Extension

    - by Leoncelot
    Hey guys, This is my first time ever posting on this site and the problem I'm about to pose is difficult to articulate due to the set of variables required to arrive at it. Let me just quickly explain the framework I'm working with. I'm building a Chrome Extension using jQuery, jQuery-ui, and Backbone The entire JS suite for the extension is written in CoffeeScript and I'm utilizing Rails and the asset pipeline to manage it all. This means that when I want to deploy my extension code I run rake assets:precompile and copy the resulting compressed JS to my extensions Directory. The nice thing about this approach is that I can actually run the extension js from inside my Rails app by including the library. This is basically the same as my extensions background.js file which injects the js as a content script. Anyway, the problem I've recently encountered was when I tried testing my extension on my buddy's site, whiskeynotes.com. What I was noticing is that my backbone models were being mangled upon adding them to their respective collections. So something like this.collection.add(new SomeModel) created some nonsense version of my model. This code eventually runs into Backbone's prepareModel code _prepareModel: function(model, options) { options || (options = {}); if (!(model instanceof Model)) { var attrs = model; options.collection = this; model = new this.model(attrs, options); if (!model._validate(model.attributes, options)) model = false; } else if (!model.collection) { model.collection = this; } return model; }, Now, in most of the sites on which I've tested the extension, the result is normal, however on my buddy's site the !(model instance Model) evaluates to true even though it is actually an instance of the correct class. The consequence is a super messed up version of the model where the model's attributes is a reference to the models collection (strange right?). Needless to say, all kinds of crazy things were happening afterward. Why this is occurring is beyond me. However changing this line (!(model instanceof Model)) to (!(model instanceof Backbone.Model)) seems to fix the problem. I thought maybe it had something to do with the Flot library (jQuery graph library) creating their own version of 'Model' but looking through the source yielded no instances of it. I'm just curious as to why this would happen. And does it make sense to add this little change to the Backbone source? Update: I just realized that the "fix" doesn't actually work. I can also add that my backbone Models are namespaced in a wrapping object so that declaration looks something like class SomeNamespace.SomeModel extends Backbone.Model

    Read the article

  • Javascript self contained sandbox events and client side stack

    - by amnon
    I'm in the process of moving a JSF heavy web application to a REST and mainly JS module application . I've watched "scalable javascript application architecture" by Nicholas Zakas on yui theater (excellent video) and implemented much of the talk with good success but i have some questions : I found the lecture a little confusing in regards to the relationship between modules and sandboxes , on one had to my understanding modules should not be effected by something happening outside of their sandbox and this is why they publish events via the sandbox (and not via the core as they do access the core for hiding base libary) but each module in the application gets a new sandbox ? , shouldn't the sandbox limit events to the modoules using it ? or should events be published cross page ? e.g. : if i have two editable tables but i want to contain each one in a different sandbox and it's events effect only the modules inside that sandbox something like messabe box per table which is a different module/widget how can i do that with sandbox per module , ofcourse i can prefix the events with the moduleid but that creates coupling that i want to avoid ... and i don't want to package modules toghter as one module per combination as i already have 6-7 modules ? while i can hide the base library for small things like id selector etc.. i would still like to use the base library for module dependencies and resource loading and use something like yui loader or dojo.require so in fact i'm hiding the base library but the modules themself are defined and loaded by the base library ... seems a little strange to me libraries don't return simple js objects but usualy wrap them e.g. : u can do something like $$('.classname').each(.. which cleans the code alot , it makes no sense to wrap the base and then in the module create a dependency for the base library by executing .each but not using those features makes a lot of code written which can be left out ... and implemnting that functionality is very bug prone does anyonen have any experience with building a front side stack of this order ? how easy is it to change a base library and/or have modules from different libraries , using yui datatable but doing form validation with dojo ... ? some what of a combination of 2+4 if u choose to do something like i said and load dojo form validation widgets for inputs via yui loader would that mean dojocore is a module and the form module is dependant on it ? Thanks .
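
    On question 1, one way to read the pattern: the sandbox is the module's facade, so it can legitimately be the event scope as well, with publish/subscribe confined to whichever modules share a sandbox instance; events only travel page-wide if you route them through the core. A minimal sketch (names illustrative):

        function Sandbox(core) {
            var listeners = {};   // private to this sandbox instance
            this.subscribe = function (evt, fn) {
                (listeners[evt] = listeners[evt] || []).push(fn);
            };
            this.publish = function (evt, data) {
                (listeners[evt] || []).forEach(function (fn) { fn(data); });
            };
        }

        // two tables, two sandboxes: events from one never reach the other
        var tableSandboxA = new Sandbox(core);   // core object elided here
        var tableSandboxB = new Sandbox(core);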

    Read the article
