Search Results

Search found 8647 results on 346 pages for 'intel cpu'.

Page 334/346

  • response.redirect to classic asp failing {Unable to evaluate expression because the code is optimized}

    - by jeff
    I have the following code pasted below. For some reason, the response.redirect seems to be failing and it is maxing out the cpu on my server and just doesn't do anything. The .net code uploads the file fine, but does not redirect to the asp page to do the processing. I know this is absolute rubbish why would you have .net code redirecting to classic asp, it is a legacy app. I have tried putting false or true etc. at the end of the redirect as I have read other people have had issues with this. Please help as it's driving me insane! It's so strange, it runs locally on my machine but won't run on my server! I am getting the following error when I debugged remotely. {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.} (UPDATED) After debugging remotely and taking the redirect out of the try catch, I have found that the redirect is trying to get to the correct location but after it leaves the redirect is just seems to get lost. (almost as if it can't navigate away from the cobra_import project) back up a level to COBRA/pages. Why is this??? This has worked previously!!! public void btnUploadTheFile_Click(object Source, EventArgs evArgs) { //need to check that the uploaded file is an xls file. string strFileNameOnServer = "PJI3.txt"; string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"]; if ("" == strFileNameOnServer) { txtOutput.InnerHtml = "Error - a file name must be specified."; return; } if (null != uplTheFile.PostedFile) { try { uplTheFile.PostedFile.SaveAs(strBaseLocation+strFileNameOnServer); txtOutput.InnerHtml = "File <b>" + strBaseLocation+strFileNameOnServer+"</b> uploaded successfully"; Response.Redirect ("/COBRA/pages/sap_import_pji3_prc.asp"); } catch (Exception e) { txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation+strFileNameOnServer+"</b><br>"+ e.ToString(); } } }
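
    A likely culprit here, for what it's worth: the parameterless Response.Redirect ends the response by throwing a ThreadAbortException, and the surrounding try/catch swallows it, which also fits the "gets lost after the redirect" symptom. A minimal sketch of the commonly suggested workaround - redirect outside the try/catch with endResponse set to false, then complete the request explicitly; member names are reused from the question and not verified against the real project:

        try
        {
            uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
            txtOutput.InnerHtml = "File uploaded successfully";
        }
        catch (Exception ex)
        {
            txtOutput.InnerHtml = "Error saving file: " + ex.ToString();
            return;
        }
        // Redirect without aborting the thread, then end the pipeline cleanly.
        Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp", false);
        HttpContext.Current.ApplicationInstance.CompleteRequest();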

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic datatypes of C, like short, int, long, float - to be exact, all the numerical types. These types need to be known so the right operations are performed on the right numbers, for example so the FPU is used to add two float numbers, and that means the compiler must know what the type is. But when it comes to characters I am a little bit lost. I know that the basic C datatype char is there for coding ASCII characters. What I don't know is why you even need a separate datatype for characters. Why couldn't you just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could just specify it somehow. Another thing is, when you want to use Unicode, you must use the datatype wchar. But what if I would like to use some other coding, for example ISO or a Windows codepage, instead of UTF? Because wchar codes characters as UTF-16 or UTF-32 (I read it's compiler specific). And what if I wanted to use, say, some imaginary new 8-byte text coding? What datatype should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I just tell the compiler "get the UTF-32 value of the character I typed and save it into a 4-char field." I thought text coding was to be dealt with at the end, by the print function for example, and that I just need to specify which coding the compiler should use; since Windows doesn't use ASCII in Win32 apps, I guess the C compiler must convert the char I typed to ASCII from whatever type it is that Windows sends to the editor. And the last thing: what if I want to use, for example, a 25-byte integer for some high-precision math? C has no specify-it-yourself datatype. Yes, I know this would be difficult, since all the math operations would need to change - the CPU cannot add 25-byte numbers together - but is there a way to do it? Or is there some math library for it? What if I want to compute Pi to 1000000000000000 digits? :) I know my question is pretty long, but I just wanted to explain my thoughts as best I can in English; since it's not my native language, that is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I read a lot about text coding and plenty of C tutorials, but nothing about this. Thank you for your time.
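
    On the first point: char in C is already just a small integer type, so a one-byte integer and a char are the same thing at the machine level; printf only needs to be told how to interpret the value. A tiny standalone illustration (standard C, unrelated to any particular compiler):

        #include <stdio.h>

        int main(void)
        {
            char c = 65;                 /* same bits as 'A' */
            printf("%c %d\n", c, c);     /* prints "A 65": one value, two interpretations */
            printf("%c %d\n", 'A', 'A'); /* character literals are plain ints in C anyway */
            return 0;
        }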

    Read the article

  • C# DLL Deployed in COM+. Error while accessing the methods.

    - by Dakshinamurthy
    I have the C# Dll (ABService) deployed in COM + and my os Is windows 2008. I have given the strong name for this dll and its dependent dll’s When I access the method of this dll through localhost or if I add the reference to the client project the method are executed successfully. Simply if I access the dll from the same machine with the reference it is working. So I think there is no problem with the way I deployed in the COM +. I have the doubt whether I have the problem in OS and Visual Studio 2008 combination. I have built all the dll with the Visual Studio 2008 with Target cpu as x86 and targert framework as 2.0. I have given in below the codes I have tried with and the errors. I need to create the object for the dll in server machine(64 bit) and access its method from the client(32 bit) Code : Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2],false); ABService.Service service1= (ABService.Service)Activator.CreateInstance(svr); strresult = service1.ExecuteService(orequest.xml); Error :{"Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine 10.105.138.64 failed due to the following error: 80040154."} Code Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2],true); object Service1 = null; Service1 = (ABService.Service)Activator.CreateInstance(svr, true); strresult = Convert.ToString(ReflectionHelper.Invoke(Service1, "ExecuteService", new object[] { orequest.xml })); Service1 = null; Error: Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine ftpsite failed due to the following error: 80040154. With the below code instead of C#.Net dll if i have the vb dll in Com + the method is executed successfully. Code ords = new RDS.DataSpace(); ords.InternetTimeout = 600000; object M_Service = null; ABService.Service oabservice = null; M_Service = ords.CreateObject("ABService.Service",url); strresult = Convert.ToString(ReflectionHelper.Invoke(M_Service, "ExecuteService", new object[] { orequest.xml })); Error : {"Object doesn't support this property or method 'ExecuteService'"} Code object obj=Interaction.CreateObject("ABService.Service", "10.105.138.64"); strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml })); Error: {"Cannot create ActiveX component."} Code object obj = Activator.GetObject(typeof(ABService.Service), @"http://10.105.138.64:80/ABANET"); strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml })); Error: InnerException {"The remote server returned an error: (405) Method Not Allowed."} System.Exception {System.Net.WebException} Message "Exception has been thrown by the target of an invocation.”

    Read the article

  • How to do the processing and keep GUI refreshed using databinding?

    - by macias
    History of the problem This is continuation of my previous question How to start a thread to keep GUI refreshed? but since Jon shed new light on the problem, I would have to completely rewrite original question, which would make that topic unreadable. So, new, very specific question. The problem Two pieces: CPU hungry heavy-weight processing as a library (back-end) WPF GUI with databinding which serves as monitor for the processing (front-end) Current situation -- library sends so many notifications about data changes that despite it works within its own thread it completely jams WPF data binding mechanism, and in result not only monitoring the data does not work (it is not refreshed) but entire GUI is frozen while processing the data. The aim -- well-designed, polished way to keep GUI up to date -- I am not saying it should display the data immediately (it can skip some changes even), but it cannot freeze while doing computation. Example This is simplified example, but it shows the problem. XAML part: <StackPanel Orientation="Vertical"> <Button Click="Button_Click">Start</Button> <TextBlock Text="{Binding Path=Counter}"/> </StackPanel> C# part (please NOTE this is one piece code, but there are two sections of it): public partial class MainWindow : Window,INotifyPropertyChanged { // GUI part public MainWindow() { InitializeComponent(); DataContext = this; } private void Button_Click(object sender, RoutedEventArgs e) { var thread = new Thread(doProcessing); thread.IsBackground = true; thread.Start(); } // this is non-GUI part -- do not mess with GUI here public event PropertyChangedEventHandler PropertyChanged; public void OnPropertyChanged(string property_name) { if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(property_name)); } long counter; public long Counter { get { return counter; } set { if (counter != value) { counter = value; OnPropertyChanged("Counter"); } } } void doProcessing() { var tmp = 10000.0; for (Counter = 0; Counter < 10000000; ++Counter) { if (Counter % 2 == 0) tmp = Math.Sqrt(tmp); else tmp = Math.Pow(tmp, 2.0); } } } Known workarounds (Please do not repost them as answers) Those two first are based on Jon ideas: pass GUI dispatcher to library and use it for sending notifications -- why it is ugly? because it could be no GUI at all give up with data binding COMPLETELY (one widget with databinding is enough for jamming), and instead check from time to time data and update the GUI manually -- well, I didn't learn WPF just to give up with it now ;-) and this is mine, it is ugly, but simplicity of it kills -- before sending notification freeze a thread -- Thread.Sleep(1) -- to let the potential receiver "breathe" -- it works, it is minimalistic, it is ugly though, and it ALWAYS slows down computation even if no GUI is there So... I am all ears for real solutions, not some tricks.
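
    One middle-ground approach often suggested for this pattern (an assumption here, not taken from the question): keep the binding, but throttle how often the worker raises change notifications so the dispatcher queue never floods. A rough sketch reusing the Counter/OnPropertyChanged members from the example; the 100 ms interval is arbitrary:

        private readonly System.Diagnostics.Stopwatch notifyTimer = System.Diagnostics.Stopwatch.StartNew();

        public void OnPropertyChanged(string property_name)
        {
            // Forward at most ~10 notifications per second; remember to raise one
            // final, unthrottled notification when the computation completes.
            if (notifyTimer.ElapsedMilliseconds < 100) return;
            notifyTimer.Reset();
            notifyTimer.Start();

            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(property_name));
        }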

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have strange a memory corruption problem. After many hours debugging and trying I think I found something. For example: I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So, the _ gets replaced by an 0 byte. I have seen this happening once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (in case of 9 till 32 bytes to move): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and 2 memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... could not reproduce however... This morning I read something about the 8087CW mode, and yes, if this is changed into $27F I get memory corruption! Normally it is $133F: The difference between $133F and $027F is that $027F sets up the FPU for doing less precise calculations (limiting to Double in stead of Extended) and different infiniti handling (which was used for older FPU’s, but is not used any more). Okay, now I found why but not when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and dll's and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?
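
    A workaround often suggested for this class of problem (offered as a sketch only, not verified against this code): save the FPU control word before the call that clobbers it and restore it afterwards. Get8087CW and Set8087CW live in Delphi's System unit; the DC and size variables below are placeholders:

        var
          SavedCW: Word;
        begin
          SavedCW := Get8087CW;
          try
            StretchBlt(DestDC, 0, 0, W, H, SrcDC, 0, 0, W, H, SRCCOPY); // or the TCanvas.StretchDraw call
          finally
            Set8087CW(SavedCW); // undo whatever precision/exception mask the driver set
          end;
        end;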

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • What is the fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert bool to byte? I want this mapping: False=0, True=1. Note: I don't want to use any if statement. Update: I don't want to use a conditional statement. I don't want the CPU to stall or have to guess the next statement. I want to optimize this code: private static string ByteArrayToHex(byte[] barray) { char[] c = new char[barray.Length * 2]; byte k; for (int i = 0; i < barray.Length; ++i) { k = ((byte)(barray[i] >> 4)); c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30); k = ((byte)(barray[i] & 0xF)); c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30); } return new string(c); } Update: The length of the array is very large - on the order of terabytes! - so I need to optimize where possible. I shouldn't need to explain myself; the question is still valid. Update: I'm working on a project and looking at other people's code. That's why I didn't provide the function in the first place. I didn't want to spend time explaining it to people when they have opinions about the code. I shouldn't need to provide the background of my work in my question, or a function that was not written by me. I have started to optimize it part by part. If I needed help with the whole function I would have asked that in another question. That is why I asked something this simple at the beginning. Unfortunately people couldn't keep themselves to the question. So please, if you want to help, answer the question. Update: For those who want to see the point of this question, this example shows how two if statements are removed from the code. byte A = k > 9; //If this were possible, (k>9) == 0 || 1 c[i * 2] = A * (k + 0x37) - (A - 1) * (k + 0x30);
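
    For the concrete hex-dump loop above, the comparison can be removed entirely with a precomputed lookup table instead of a bool-to-byte trick; a sketch with the same signature as the question's method:

        private static readonly char[] HexDigits = "0123456789ABCDEF".ToCharArray();

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            for (int i = 0; i < barray.Length; ++i)
            {
                c[i * 2]     = HexDigits[barray[i] >> 4];  // high nibble, no branch
                c[i * 2 + 1] = HexDigits[barray[i] & 0xF]; // low nibble
            }
            return new string(c);
        }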

    Read the article

  • GCC problem with raw double type comparisons

    - by Monomer
    I have the following bit of code, however when compiling it with GCC 4.4 with various optimization flags I get some unexpected results when its run. #include <iostream> int main() { const unsigned int cnt = 10; double lst[cnt] = { 0.0 }; const double v[4] = { 131.313, 737.373, 979.797, 731.137 }; for(unsigned int i = 0; i < cnt; ++i) { lst[i] = v[i % 4] * i; } for(unsigned int i = 0; i < cnt; ++i) { double d = v[i % 4] * i; if(lst[i] != d) { std::cout << "error @ : " << i << std::endl; return 1; } } return 0; } when compiled with: "g++ -pedantic -Wall -Werror -O1 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O2 -o test test.cpp" I get the following output: "error @ : 3" when compiled with: "g++ -pedantic -Wall -Werror -O3 -o test test.cpp" I get no errors when compiled with: "g++ -pedantic -Wall -Werror -o test test.cpp" I get no errors I do not believe this to be an issue related to rounding, or epsilon difference in the comparison. I've tried this with Intel v10 and MSVC 9.0 and they all seem to work as expected. I believe this should be nothing more than a bitwise compare. If I replace the if-statement with the following: if (static_cast<long long int>(lst[i]) != static_cast<long long int>(d)), and add "-Wno-long-long" I get no errors in any of the optimization modes when run. If I add std::cout << d << std::endl; before the "return 1", I get no errors in any of the optimization modes when run. Is this a bug in my code, or is there something wrong with GCC and the way it handles the double type?
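
    The symptom is consistent with the 32-bit x87 FPU keeping intermediates in 80-bit registers at some optimization levels, so lst[i] (already rounded to 64 bits in memory) and d (possibly still in a register) need not be bit-identical. A hedged way to confirm it is to force the fresh value through a 64-bit store before comparing:

        for (unsigned int i = 0; i < cnt; ++i) {
            volatile double d = v[i % 4] * i;   // volatile forces a round trip through memory
            if (lst[i] != d) {
                std::cout << "error @ : " << i << std::endl;
                return 1;
            }
        }
        // Alternatively compile with -ffloat-store, or -msse2 -mfpmath=sse, so
        // intermediates are kept at double precision in the first place.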

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi Everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question that I have about calculating on the fly vs storing fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following 2 examples. Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100km. You also want to know how many KMs it can travel, which can be calculated from the tank size and economy. I see 2 ways of doing this: When a car is added or updated, calculate the amount of KMs and store this as a static field in the database. Every time a car is accessed, calculate the amount of KMs on the fly. Because the cars economy/tank size doesn't change (although it could be edited), the KMs is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste cpu time as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated? My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have a app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better. To perform a MySQL query to count all the items in each category every single time the page is accessed? Or store the count in a field in the categories table and update when an item is added / deleted? I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow as opposed to storing the data in a field. If it's not then please let me know, I just want to learn about when to use either method. On a small scale I guess it wouldn't matter either way, but apps like Facebook, would they really count the amount of friends you have every time someone views your profile or would they just store it as a field? I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs storing. Thanks in advance, Christian
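
    For the car example, one way to keep the value derived without paying for it in application code is to compute it inside the query itself, and to maintain an explicit counter only where a COUNT(*) per page view really is too expensive. A sketch, with table and column names invented for illustration:

        -- Derived on the fly: no redundant storage, negligible CPU cost per row.
        SELECT tank_litres,
               litres_per_100km,
               tank_litres / litres_per_100km * 100 AS range_km
        FROM cars;

        -- Counter-cache style for the category/item example: store the count and
        -- adjust it whenever an item is added or deleted.
        UPDATE categories SET item_count = item_count + 1 WHERE id = 42;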

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (cause the guys thought it is a lot simpler to make the UI in it). However, as the image processing part needs a lot of CPU juice I am making this part in C++. The idea is to link it to the C# project and just call a function from a DLL to make the image processing part and allow to the C# environment to process the data afterwards. Now the only problem is that it seems I am not able to make the DLL. Simply put the compiler refuses to put any function into the DLL that I compile. Because the project requires some development time testing I have created two projects into a C++ solution. One is for the Dll and another console application. The console project holds all the files and I just include the corresponding header into my DLL project file. I thought the compiler should take out the functions that I marked as to be exported and make the DLL from them. Nevertheless this does not happens. Here it is how I defined the function in the header: extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf, int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag, ObjectInformation* robot1, ObjectInformation* robot2, ObjectInformation* robot3, ObjectInformation* robot4, ObjectInformation* puck); extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput, CvRect &imgROI, CvScalar &refHSVColorLow, CvScalar &refHSVColorHi ); Followed by the implementation in the cpp file: extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput, CvRect &imgROI,&refHSVColorLow, CvScalar &refHSVColorHi ) { \\... return cvPoint((int)( M10/M00) + imgROI.x, (int)( M01/M00 ) + imgROI.y) ;} extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf, int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag, ObjectInformation* robot1, ObjectInformation* robot2, ObjectInformation* robot3, ObjectInformation* robot4, ObjectInformation* puck) { \\ ...}; And my main file for the DLL project looks like: #ifdef _MANAGED #pragma managed(push, off) #endif /// <summary> Include files. </summary> #include "..\ImageProcessingDebug\ImageProcessingTest.h" #include "..\ImageProcessingDebug\ImageProcessing.h" BOOL APIENTRY DllMain( HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved) { return TRUE; } #ifdef _MANAGED #pragma managed(pop) #endif Needless to say it does not work. A quick look with DLL export viewer 1.36 reveals that no function is inside the library. I don't get it. What I am doing wrong ? As side not I am using the C++ objects (and here it is the C++ DLL part) such as the vector. However, only for internal usage. These will not appear in the headers of either function as you can observe from the previous code snippets. Any ideas? Thx, Bernat
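
    Two things commonly bite in this setup: the .cpp files that define the exported functions must be compiled as part of the DLL project itself (merely #including a header from the console project does not pull the definitions into the DLL), and the export keyword is usually wrapped in a macro switched by a symbol defined only in the DLL project. A generic sketch - the macro, symbol and file names are placeholders, not taken from the project:

        // imageprocessing_api.h
        #ifdef IMAGEPROCESSING_EXPORTS            // define this only in the DLL project settings
        #define IP_API extern "C" __declspec(dllexport)
        #else
        #define IP_API extern "C" __declspec(dllimport)
        #endif

        IP_API CvPoint __stdcall RefPointFinder(IplImage* imgInput, CvRect& imgROI,
                                                CvScalar& refHSVColorLow, CvScalar& refHSVColorHi);

        // imageprocessing_api.cpp - this file must be listed in the DLL project
        #include "imageprocessing_api.h"          // OpenCV types come from the usual cv headers

        IP_API CvPoint __stdcall RefPointFinder(IplImage* imgInput, CvRect& imgROI,
                                                CvScalar& refHSVColorLow, CvScalar& refHSVColorHi)
        {
            // ... image processing ...
            return cvPoint(0, 0);
        }
        // Note: __stdcall still decorates the exported name (e.g. _RefPointFinder@16);
        // a .def file gives undecorated names if the C# side needs them.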

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First a little background ... In what follows, I use C, C++ and Java for coding (general) algorithms - not GUIs and fancy programs with interfaces, just simple command line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java, and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, so I quickly reverted to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and needed more experience with how a CPU interacts with memory and what's really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions. After learning more about the computer through this experience, I realized that C++ is much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse, like a swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" to not abstract away what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C, with containers from C++ thrown in - the only feature I use from C++. I'd like to know whether it's considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code?
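
    For what it's worth, the mix described - essentially C, with std::vector doing the bookkeeping - looks roughly like this (illustrative only):

        #include <cstdio>
        #include <vector>

        int main(void)
        {
            std::vector<int> primes;           // grows on demand, frees itself on scope exit
            primes.push_back(2);
            primes.push_back(3);
            primes.push_back(5);

            for (size_t i = 0; i < primes.size(); ++i)
                printf("%d\n", primes[i]);     // everything else stays plain C style
            return 0;
        }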

    Read the article

  • How to interpret kernel panics?

    - by Owen
    Hi all, I'm new to linux kernel and could barely understand how to debug kernel panics. I have this error below and I don't know where in the C code should I start checking. I was thinking maybe I could echo what functions are being called so I could check where in the code is this null pointer dereferenced. What print function should I use ? How do you interpret the error message below? Unable to handle kernel NULL pointer dereference at virtual address 0000000d pgd = c7bdc000 [0000000d] *pgd=4785f031, *pte=00000000, *ppte=00000000 Internal error: Oops: 17 [#1] PREEMPT Modules linked in: bcm5892_secdom_fw(P) bcm5892_lcd snd_bcm5892 msr bcm5892_sci bcm589x_ohci_p12 bcm5892_skeypad hx_decoder(P) pinnacle hx_memalloc(P) bcm_udc_dwc scsi_mod g_serial sd_mod usb_storage CPU: 0 Tainted: P (2.6.27.39-WR3.0.2ax_standard #1) PC is at __kmalloc+0x70/0xdc LR is at __kmalloc+0x48/0xdc pc : [c0098cc8] lr : [c0098ca0] psr: 20000093 sp : c7a9fd50 ip : c03a4378 fp : c7a9fd7c r10: bf0708b4 r9 : c7a9e000 r8 : 00000040 r7 : bf06d03c r6 : 00000020 r5 : a0000093 r4 : 0000000d r3 : 00000000 r2 : 00000094 r1 : 00000020 r0 : c03a4378 Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user Control: 00c5387d Table: 47bdc008 DAC: 00000015 Process sh (pid: 1088, stack limit = 0xc7a9e260) Stack: (0xc7a9fd50 to 0xc7aa0000) fd40: c7a6a1d0 00000020 c7a9fd7c c7ba8fc0 fd60: 00000040 c7a6a1d0 00000020 c71598c0 c7a9fd9c c7a9fd80 bf06d03c c0098c64 fd80: c71598c0 00000003 c7a6a1d0 bf06c83c c7a9fdbc c7a9fda0 bf06d098 bf06d008 fda0: c7159880 00000000 c7a6a2d8 c7159898 c7a9fde4 c7a9fdc0 bf06d130 bf06d078 fdc0: c79ca000 c7159880 00000000 00000000 c7afbc00 c7a9e000 c7a9fe0c c7a9fde8 fde0: bf06d4b4 bf06d0f0 00000000 c79fd280 00000000 0f700000 c7a9e000 00000241 fe00: c7a9fe3c c7a9fe10 c01c37b4 bf06d300 00000000 c7afbc00 00000000 00000000 fe20: c79cba84 c7463c78 c79fd280 c7473b00 c7a9fe6c c7a9fe40 c00a184c c01c35e4 fe40: 00000000 c7bb0005 c7a9fe64 c79fd280 c7463c78 00000000 c00a1640 c785e380 fe60: c7a9fe94 c7a9fe70 c009c438 c00a164c c79fd280 c7a9fed8 c7a9fed8 00000003 fe80: 00000242 00000000 c7a9feb4 c7a9fe98 c009c614 c009c2a4 00000000 c7a9fed8 fea0: c7a9fed8 00000000 c7a9ff64 c7a9feb8 c00aa6bc c009c5e8 00000242 000001b6 fec0: 000001b6 00000241 00000022 00000000 00000000 c7a9fee0 c785e380 c7473b00 fee0: d8666b0d 00000006 c7bb0005 00000300 00000000 00000000 00000001 40002000 ff00: c7a9ff70 c79b10a0 c79b10a0 00005402 00000003 c78d69c0 ffffff9c 00000242 ff20: 000001b6 c79fd280 c7a9ff64 c7a9ff38 c785e380 c7473b00 00000000 00000241 ff40: 000001b6 ffffff9c 00000003 c7bb0000 c7a9e000 00000000 c7a9ff94 c7a9ff68 ff60: c009c128 c00aa380 4d18b5f0 08000000 00000000 00071214 0007128c 00071214 ff80: 00000005 c0027ee4 c7a9ffa4 c7a9ff98 c009c274 c009c0d8 00000000 c7a9ffa8 ffa0: c0027d40 c009c25c 00071214 0007128c 0007128c 00000241 000001b6 00000000 ffc0: 00071214 0007128c 00071214 00000005 00073580 00000003 000713e0 400010d0 ffe0: 00000001 bef0c7b8 000269cc 4d214fec 60000010 0007128c 00000000 00000000 Backtrace: [] (__kmalloc+0x0/0xdc) from [] (gs_alloc_req+0x40/0x70 [g_serial]) r8:c71598c0 r7:00000020 r6:c7a6a1d0 r5:00000040 r4:c7ba8fc0 [] (gs_alloc_req+0x0/0x70 [g_serial]) from [] (gs_alloc_requests+0x2c/0x78 [g_serial]) r7:bf06c83c r6:c7a6a1d0 r5:00000003 r4:c71598c0 [] (gs_alloc_requests+0x0/0x78 [g_serial]) from [] (gs_start_io+0x4c/0xac [g_serial]) r7:c7159898 r6:c7a6a2d8 r5:00000000 r4:c7159880 [] (gs_start_io+0x0/0xac [g_serial]) from [] (gs_open+0x1c0/0x224 [g_serial]) r9:c7a9e000 r8:c7afbc00 r7:00000000 r6:00000000 r5:c7159880 
r4:c79ca000 [] (gs_open+0x0/0x224 [g_serial]) from [] (tty_open+0x1dc/0x314) [] (tty_open+0x0/0x314) from [] (chrdev_open+0x20c/0x22c) [] (chrdev_open+0x0/0x22c) from [] (__dentry_open+0x1a0/0x2b8) r8:c785e380 r7:c00a1640 r6:00000000 r5:c7463c78 r4:c79fd280 [] (__dentry_open+0x0/0x2b8) from [] (nameidata_to_filp+0x38/0x50) [] (nameidata_to_filp+0x0/0x50) from [] (do_filp_open+0x348/0x6f4) r4:00000000 [] (do_filp_open+0x0/0x6f4) from [] (do_sys_open+0x5c/0x170) [] (do_sys_open+0x0/0x170) from [] (sys_open+0x24/0x28) r8:c0027ee4 r7:00000005 r6:00071214 r5:0007128c r4:00071214 [] (sys_open+0x0/0x28) from [] (ret_fast_syscall+0x0/0x2c) Code: e59c4080 e59c8090 e3540000 159c308c (17943103) ---[ end trace be196e7cee3cb1c9 ]--- note: sh[1088] exited with preempt_count 2 process '-/bin/sh' (pid 1088) exited. Scheduling for restart. Welcome to Wind River Linux
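
    On the "what print function should I use" part: inside the kernel the usual tools are printk with a log level (the output lands in dmesg / the console) and dump_stack() to print a backtrace from an arbitrary point. A minimal sketch - the function name here is made up; the idea is to drop such lines into whichever g_serial routine from the backtrace is under suspicion:

        #include <linux/kernel.h>
        #include <linux/errno.h>

        static int gs_open_trace_example(void *port)
        {
                printk(KERN_INFO "gs_open: port=%p\n", port);   /* visible via dmesg */
                if (port == NULL) {
                        printk(KERN_ERR "gs_open: NULL port pointer\n");
                        dump_stack();                           /* print the call chain */
                        return -EINVAL;
                }
                return 0;
        }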

    Read the article

  • Search row with highest number of cells in a row with colspan

    - by user593029
    What is the efficient way to search highest number of cells in big table with numerous colspan (merge cells ** colspan should be ignored so in below example highest number of cells is 4 in first row). Is it js/jquery with reg expression or just the loop with bubble sorting. I got one link as below explainig use of regex is it ideal way ... can someone suggest pseudo code for this. High cpu consumption due to a jquery regex patch <table width="156" height="84" border="0" > <tbody> <tr style="height:10px"> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px" colspan="2"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px" colspan="2"/> </tr> <tr style="height:10px"> <td width="10" style="width:10px; height:10px" colspan="2"/> <td width="10" style="width:10px; height:10px"/> <td width="10" style="width:10px; height:10px"/> </tr> </tbody> </table>
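
    If "colspan should be ignored" simply means counting <td> elements per row (so a cell that spans two columns still counts once), no regular expression is needed; a short jQuery sketch:

        var maxCells = 0;
        $('table tr').each(function () {
            // children('td').length counts cell elements, not spanned columns,
            // so colspan is ignored automatically.
            var cells = $(this).children('td').length;
            if (cells > maxCells) { maxCells = cells; }
        });
        alert('Most cells in a row: ' + maxCells); // 4 for the table above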

    Read the article

  • Basic data alignment question

    - by Broken Logic
    I've been playing around to see how my computer works under the hood. What I'm interested in is seeing is what happens on the stack inside a function. To do this I've written the following toy program: #include <stdio.h> void __cdecl Test1(char a, unsigned long long b, char c) { char c1; unsigned long long b1; char a1; c1 = 'b'; b1 = 4; a1 = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&b1 - (long)&a1, (long)&c1 - (long)&b1, (long)&a - (long)&c1, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&a1 ); }; struct TestStruct { char a; unsigned long long b; char c; }; void __cdecl Test2(char a, unsigned long long b, char c) { TestStruct locals; locals.a = 'b'; locals.b = 4; locals.c = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&locals.b - (long)&locals.a, (long)&locals.c - (long)&locals.b, (long)&a - (long)&locals.c, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&locals.a ); }; int main() { Test1('f', 0, 'o'); Test2('f', 0, 'o'); return 0; } And this spits out the following: 9 19 - 13 - 4 8 Total: 53 8 8 - 24 - 4 8 Total: 52 The function args are well behaved but as the calling convention is specified, I'd expect this. But the local variables are a bit wonky. My question is, why wouldn't these be the same? The second call seems to produce a more compact and better aligned stack. Looking at the ASM is unenlightening (at least to me), as the variable addresses are still aliased there. So I guess this is really a question about the assembler itself allocates the stack to local variables. I realise that any specific answer is likely to be platform specific. I'm more interested in a general explanation unless this quirk really is platform specific. For the record though, I'm compiling with VS2010 on a 64bit Intel machine.

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds? Input format Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays 32- or 64-bit ints or floats. Host system |----------------+-------------------------------| | OS | Windows 2008 64-bit | | MySQL version | 5.5.24 (x86_64) | | CPU | 2x Xeon E5420 (8 cores total) | | RAM | 8GB | | SSD filesystem | 500 GiB | | HDD RAID | 12 TiB | |----------------+-------------------------------| There are some other services running on the server using negligible processor time. File statistics |------------------+--------------| | number of files | ~16,000 | | total size | 1.3 TiB | | min size | 0 bytes | | max size | 12 GiB | | mean | 800 MiB | | median | 500 MiB | | total datapoints | ~200 billion | |------------------+--------------| The total number of datapoints is a very rough estimate. Proposed schema I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra. The 200 Billion datapoint question I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this? UPDATE: additional info The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces = 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is: runs table | column name | type | |-------------+-------------| | id | PRIMARY KEY | | start_time | TIMESTAMP | | name | VARCHAR | |-------------+-------------| spectra table | column name | type | |----------------+-------------| | id | PRIMARY KEY | | name | VARCHAR | | index | INT | | spectrum_type | INT | | representation | INT | | run_id | FOREIGN KEY | |----------------+-------------| datapoints table | column name | type | |-------------+-------------| | id | PRIMARY KEY | | spectrum_id | FOREIGN KEY | | mz | DOUBLE | | num_counts | DOUBLE | | index | INT | |-------------+-------------| Is this reasonable?

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have got here two programs with me, both are doing exactly the same task. They are just setting an boolean array / vector to the value true. The program using vector takes 27 seconds to run whereas the program involving array with 5 times greater size takes less than 1 s. I would like to know the exact reason as to why there is such a major difference ? Are vectors really that inefficient ? Program using vectors #include <iostream> #include <vector> #include <ctime> using namespace std; int main(){ const int size = 2000; time_t start, end; time(&start); vector<bool> v(size); for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - 27 seconds Program using Array #include <iostream> #include <ctime> using namespace std; int main(){ const int size = 10000; // 5 times more size time_t start, end; time(&start); bool v[size]; for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - < 1 seconds Platform - Visual Studio 2008 OS - Windows Vista 32 bit SP 1 Processor Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz Memory (RAM) 1.00 GB Thanks Amare
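
    Worth noting as general background (not specific to this code): std::vector<bool> is a packed, bit-level specialization whose operator[] returns a proxy object, and an unoptimized VS2008 Debug build adds checked accesses on top, so this is close to a worst case when compared with a raw stack array. A quick way to test that hypothesis is to time the same loop over std::vector<char>, and to repeat both runs in a Release build:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main() {
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<char> v(size);            // plain bytes: no bit packing, no proxy objects
            for (int i = 0; i < size; i++)
                for (int j = 0; j < size; j++)
                    v[i] = true;
            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }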

    Read the article

  • Swing: does DefaultBoundedRangeModel coalesce multiple events?

    - by Jason S
    I have a JProgressBar displaying a BoundedRangeModel which is extremely fine grained and I was concerned that updating it too often would slow down my computer. So I wrote a quick test program (see below) which has a 10Hz timer but each timer tick makes 10,000 calls to microtick() which in turn increments the BoundedRangeModel. Yet it seems to play nicely with a JProgressBar; my CPU is not working hard to run the program. How does JProgressBar or DefaultBoundedRangeModel do this? They seem to be smart about how much work it does to update the JProgressBar, so that as a user I don't have to worry about updating the BoundedRangeModel's value. package com.example.test.gui; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.BoundedRangeModel; import javax.swing.DefaultBoundedRangeModel; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JProgressBar; import javax.swing.Timer; public class BoundedRangeModelTest1 extends JFrame { final private BoundedRangeModel brm = new DefaultBoundedRangeModel(); final private Timer timer = new Timer(100, new ActionListener() { @Override public void actionPerformed(ActionEvent arg0) { tick(); } }); public BoundedRangeModelTest1(String title) { super(title); JPanel p = new JPanel(); p.add(new JProgressBar(this.brm)); getContentPane().add(p); this.brm.setMaximum(1000000); this.brm.setMinimum(0); this.brm.setValue(0); } protected void tick() { for (int i = 0; i < 10000; ++i) { microtick(); } } private void microtick() { this.brm.setValue(this.brm.getValue()+1); } public void start() { this.timer.start(); } static public void main(String[] args) { BoundedRangeModelTest1 f = new BoundedRangeModelTest1("BoundedRangeModelTest1"); f.pack(); f.setVisible(true); f.setDefaultCloseOperation(EXIT_ON_CLOSE); f.start(); } }

    Read the article

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2 but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases but that's beside the point here I suspect.) According to specs, there are only two instances when glDrawArrays() fails with GL_INVALID_OPERATION: "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point "if a geometry shader is active and mode? is incompatible with [...]" -- nope, no geometry shaders as of now. Furthermore: I have verified & double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too. This happens across 3 different nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit -- of course, in OSX we have only the 3.2 context, no 4.2 anyway). It does not happen with an integrated "Intel HD" GPU but for that one, I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW here fails the window creation but that's an entirely different issue...) For what it's worth, here's the relevant routine excerpted from the render loop, in Golang: func (me *TMesh) render () { curMesh = me curTechnique.OnRenderMesh() gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf) if me.glElemBuf > 0 { gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf) gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil)) gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0) } else { gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) /* BOOM! */ gl.DrawArrays(me.glMode, 0, me.glNumVerts) } gl.BindBuffer(gl.ARRAY_BUFFER, 0) } So of course this is part of a bigger render-loop, though the whole "*TMesh" construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly with no errors reported when GL is queried for errors under both 3.3 and 4.2, yet on 3 nvidia GPUs with an explicit 3.2 core profile fails with an error code that according to spec is only invoked in two specific situations, none of which as far as I can tell apply here. What could be wrong here? Have you ever run into this? Any ideas what I have been missing?
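
    One difference between those contexts that matches these symptoms: a strict core profile has no default vertex array object, so glVertexAttribPointer and glDrawArrays raise GL_INVALID_OPERATION unless a VAO is bound, while compatibility-flavoured contexts quietly provide VAO 0. A hedged sketch in plain C against the GL API (the Go binding above exposes equivalent calls; vertBuf, aPosLoc, mode and numVerts are placeholders):

        /* once, at renderer or mesh setup */
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        /* per draw, with the VAO still bound */
        glBindBuffer(GL_ARRAY_BUFFER, vertBuf);
        glEnableVertexAttribArray(aPosLoc);
        glVertexAttribPointer(aPosLoc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glDrawArrays(mode, 0, numVerts);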

    Read the article

  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, and so home-brewed a pre-processor, much like C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc and tools like that. #ifdefs are used for conditional macro definitions, and yeah - macros are the way this is done. A macro can expand to another macro or two, but this should eventually terminate. Only macros have #ifdef in them - the rest of the SQL-like code just uses these macros. Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more. Also: Any CPU, Mixed Platforms, Win32. What drives me nuts is having to configure it correctly as well as choosing the right one out of 12 x 3 = 36 configurations as well as having to substitute database name depending on the type of database: config, main, or gateway. I am thinking that configuration should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so 80s ... Finally, I think my intention to build or not to build 3 types of databases for 3 types of vendors should be configured with just a tic tac toe board like: XOX OOX XXX In this case it would mean build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall thing which uses a text file and a pre-processor and many configurations seems incredibly clunky. It is year 2010 now and someone out there is bound to have a very clean and/or creative tool/solution. The only pro would be that the existing collection of macros has been well tested. Have you ever had to write SQL that would work for several vendors? How did you do it? SqlVars.txt (Every one of 30 users makes a copy of a template and modifies this to suit their needs): // This is the default parameters file and should not be changed. // You can overwrite any of these parameters by copying the appropriate // section to override into SqlVars.txt and providing your own information. //Build types are 0-Config, 1-Main, 2-Gateway BUILD_TYPE=1 REMOVE_COMMENTS=1 // Login information used when applying to a Microsoft SQL server database SQL_APPLY_MSFT_version=SQL2005 SQL_APPLY_MSFT_database=msftdb SQL_APPLY_MSFT_server=ABC SQL_APPLY_MSFT_user=msftusr SQL_APPLY_MSFT_password=msftpwd // Login information used when applying to an Oracle database SQL_APPLY_ORACLE_version=ORACLE10g SQL_APPLY_ORACLE_server=oradb SQL_APPLY_ORACLE_user=orausr SQL_APPLY_ORACLE_password=orapwd // Login information used when applying to a Sybase database SQL_APPLY_SYBASE_version=SYBASE125 SQL_APPLY_SYBASE_database=sybdb SQL_APPLY_SYBASE_server=sybdb SQL_APPLY_SYBASE_user=sybusr SQL_APPLY_SYBASE_password=sybpwd ... (THIS GOES ON)

    Read the article

  • Browsers (IE and Firefox) freeze when copying large amount of text

    - by Matt
    I have a web application - a Java servlet - that delivers data to users in the form of a text printout in a browser (text marked up with HTML in order to display in the browser as we want it to). The text does display in different colors, though most of it is black. One typical mode of operation is this: 1. User submits a form to request data. 2. Servlet delivers HTML file to browser. 3. User does CTRL+A to select all the text. 4. User does CTRL+C to copy all the text. 5. User goes to a text editor and does CTRL+V to paste the text. In the testing where I'm having this problem, step #2 successfully loads all the data - we wait for that to complete. We can scroll down to the end of what the browser loaded and see the end of the data. However, the browser freezes on step #3 (Firefox) or on step #4 (IE). Because step #2 finishes, I think it is a browser/memory issue, and not an issue with the web application. If I run queries to deliver smaller amounts of data (but after several queries we get the same data we would have above in one query) and copy/paste this text, the file I save it into ends up being about 8 MB. If I save the browser's displayed HTML to a file on my computer via File-Save As from the browser menu, it works fine and the file is about 22 MB. We've tried this on 2 different computers at work (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space), using Firefox and IE. We also tried it on a home computer from a home network outside of work (thinking it might be our IT security software causing the problem), running Windows 7 using IE, and still had the problem. When I've done this, I can see whatever browser I'm using utilizing the CPU at 50%. Firefox's memory usage grows to about 1 GB; IE's stays in the several hundred MBs. We once let this run for half an hour, and it did not complete. I'm most likely going to modify the web app to have an option of delivering a plain text file for download, and I imagine that will get the users what they need. But for the mean time, and because I'm curious - and I don't like my application freezing people's browsers, does anyone have any ideas about the browser freezing? I understand that sometimes you just reach your memory limit, but 22 MB sounds to me like an amount I should be able to copy to the clipboard.
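
    For the planned plain-text download option, the servlet side only needs a text content type and a Content-Disposition header so the browser saves the stream instead of rendering it; a rough sketch with invented names:

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class ReportDownloadServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/plain");
                resp.setCharacterEncoding("UTF-8");
                // "attachment" tells the browser to save rather than display the text.
                resp.setHeader("Content-Disposition", "attachment; filename=\"report.txt\"");
                PrintWriter out = resp.getWriter();
                out.println("... report text, no HTML markup ...");
            }
        }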

    Read the article

  • NSUrlconnection problem receiving data from some filehosts

    - by Tammo
    hello again, i am trying to develop an downloadmanager. i can now download files from almost anywhere on linkclick. in the - (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType i check if the url is a url to a binaryfile like a zipfile. than i setup a nsurlconnection NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringLocalCacheData timeoutInterval:20.0]; [urlRequest setValue:@"User-Agent" forHTTPHeaderField:@"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3"]; NSURLConnection *mainConnection = [NSURLConnection connectionWithRequest:urlRequest delegate:self]; if (nil == mainConnection) { NSLog(@"Could not create the NSURLConnection object"); } (void)connection:(NSURLConnection )connection didReceiveResponse:(NSURLResponse)response { self.tabBarController.selectedIndex=1; [receivedData setLength:0]; percent = 0; localFilename = [[[url2 absoluteString] lastPathComponent] copy]; NSLog(localFilename); NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0] ; NSString *appFile = [documentsDirectory stringByAppendingPathComponent:localFilename]; [[NSFileManager defaultManager] createFileAtPath:appFile contents:nil attributes:nil]; [downloadname setHidden:NO]; [downloadname setText:localFilename]; expectedBytes = [response expectedContentLength]; exp = [response expectedContentLength]; NSLog(@"content-length: %lli Bytes", expectedBytes); file = [[NSFileHandle fileHandleForUpdatingAtPath:appFile] retain]; if (file) { [file seekToEndOfFile]; } } (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { if (file) { [file seekToEndOfFile]; } [file writeData:data]; [receivedData appendData:data]; long long resourceLength = [receivedData length]; float res = [receivedData length]; percent = res/exp; [progress setHidden:NO]; [progress setProgress:percent]; NSLog(@"Remaining: %lli KB", (expectedBytes-resourceLength)/1024); [kbleft setHidden:NO]; [kbleft setText:[NSString stringWithFormat:@"%lli / %lli KB", expectedBytes/1024 ,(resourceLength)/1024]]; } in the connectiondidfinish loading i close the file. all working fine for nearly every hoster except hosters wich have a capture procedure before like filedude.com in the uiwebview i can surf to the downloadpage enter the captcha and get the downloadlink. when i click on it the file will be created in the documentsdir with the filename and the download starts but he dont get any data. every file has 0kb and the NSLog(@"content-length: %lli Bytes", expectedBytes); gives out something like 100-400 byte . can somebody help me solve this problem? kind regards
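
    One detail worth double-checking in the snippet above: setValue:forHTTPHeaderField: takes the header value first and the field name second, so as written the request sends a header literally named "Mozilla/5.0 ..." instead of a User-Agent. Hosts that sit behind a captcha page typically also expect the session cookies and a Referer from the page that produced the link. A hedged correction of just that part:

        [urlRequest setValue:@"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3"
          forHTTPHeaderField:@"User-Agent"];
        // Optionally forward what the UIWebView session already negotiated:
        [urlRequest setValue:[url2 absoluteString] forHTTPHeaderField:@"Referer"];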

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates)- C program on SLES

    - by user1426181
    I run my C program on Suse Linux Enterprise that compresses several thousand large files (between 10MB and 100MB in size), and the program gets slower and slower as the program runs (it's running multi-threaded with 32 threads on a Intel Sandy Bridge board). When the program completes, and it's run again, it's still very slow. When I watch the program running, I see that the memory is being depleted while the program runs, which you would think is just a classic memory leak problem. But, with a normal malloc()/free() mismatch, I would expect all the memory to return when the program terminates. But, most of the memory doesn't get reclaimed when the program completes. The free or top command shows Mem: 63996M total, 63724M used, 272M free when the program is slowed down to a halt, but, after the termination, the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up. The top program only shows that the program, while running, is using at most 4% or so of the memory. I thought that it might be a memory fragmentation problem, but, I built a small test program that simulates all the memory allocation activity in the program (many randomized aspects were built in - size/quantity), and it always returns all the memory upon completion. So, I don't think that's it. Questions: Can there be a malloc()/free() mismatch that will lose memory permanently, i.e. even after the process completes? What other things in a C program (not C++) can cause permanent memory loss, i.e. after the program completes, and even the terminal window closes? Only a reboot brings the memory back. I've read other posts about files not being closed causing problems, but, I don't think I have that problem. Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program. If the program only shows a 4% memory usage, will something like valgrind find this problem?

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question because I already found out the answer, but still interesting thing. I always thought that hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes of time on a Core 2 CPU. The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed (possibly adding more items to process), and repeats this until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is why is it slow? Stopwatch watch = new Stopwatch(); watch.Start(); HashSet<int> processed = new HashSet<int>(); Dictionary<int, int> todo = new Dictionary<int, int>(); todo.Add(1, 1); int iterations = 0; int limit = 500000; while (todo.Count > 0) { iterations++; var key = todo.Keys.First(); var value = todo[key]; todo.Remove(key); if (!processed.Contains(key)) { processed.Add(key); // process item here if (key < limit) { todo[key + 13] = value + 1; todo[key + 7] = value + 1; } // doesn't matter much how } } Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed); This results in: Iterations: 923007; Time: 00:02:09.8414388. Simply changing Dictionary to SortedDictionary yields: Iterations: 499976; Time: 00:00:00.4451514. 300 times faster while having only 2 times less iterations. The same happens in java. Used HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
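
    The slowdown is consistent with how Dictionary enumerates: Keys.First() walks the internal entry array from slot 0 and has to skip the ever-growing run of freed slots, so each call tends towards O(n). Since the loop only needs "some pending item", a Stack (or Queue) is a better fit for the todo list; a sketch of the same loop:

        var processed = new HashSet<int>();
        var todo = new Stack<KeyValuePair<int, int>>();
        todo.Push(new KeyValuePair<int, int>(1, 1));
        int limit = 500000;

        while (todo.Count > 0)
        {
            var item = todo.Pop();                   // O(1), no scan over freed slots
            if (!processed.Add(item.Key)) continue;  // Add returns false if already seen
            if (item.Key < limit)
            {
                todo.Push(new KeyValuePair<int, int>(item.Key + 13, item.Value + 1));
                todo.Push(new KeyValuePair<int, int>(item.Key + 7, item.Value + 1));
            }
        }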

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if they match a list of terms from my database. I have a fairly huge list of terms - around 6000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database - the majority of which required manual removal. Since then I've lost faith in that idea and the more simple PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms. My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on a page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this: var words = [ { word: 'Something', link: 'http://www.something.com' }, { word: 'Something Else', link: 'http://www.something.com/else' } ]; When the object has been built, I'd use this kind of code: //for each array element $.each(words, function() { //store it ("this" is gonna become the dom element in the next function) var search = this; $('.message').each( function() { //if it's exactly the same if ($(this).text() === search.word) { //do your magic tricks $(this).html('<a href="' + search.link + '">' + search.link + '</a>'); } } ); } ); Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do?. One option would possibly be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post and then the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms.. then the return call to the JavaScript could simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most). I have no problem with the script taking a few seconds to "load" on the user's browser, as long as it isn't impacting their CPU usage or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,
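
    One way to cut the per-page work sharply - offered as an assumption, not tested against this data - is to let the PHP side return only the terms that actually occur in the post, then build a single alternation regex and do one replace pass per message instead of thousands of comparisons. A sketch:

        // words: [{ word: 'Something', link: 'http://www.something.com' }, ...]
        var linkByWord = {};
        var escaped = $.map(words, function (w) {
            linkByWord[w.word.toLowerCase()] = w.link;
            return w.word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex metacharacters
        });
        escaped.sort(function (a, b) { return b.length - a.length; }); // longest first, so "Something Else" beats "Something"
        var pattern = new RegExp('\\b(' + escaped.join('|') + ')\\b', 'gi');

        $('.message').each(function () {
            var replaced = $(this).text().replace(pattern, function (match) {
                return '<a href="' + linkByWord[match.toLowerCase()] + '">' + match + '</a>';
            });
            $(this).html(replaced);
        });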

    Read the article
