Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • ASP.NET application - Error when trying to connect to a SQL Server 2008 instance

    - by Pablo Dami
    Hi everyone! Despite being a regular reader of this great forum, this is my first post on it. I believe this community can help me with the following problem. I'm trying to publish an ASP.NET website on IIS 6.0 (Windows 2003 Server), and I'm having trouble connecting to the database. Curiously, I have installed another ASP.NET website on the same IIS 6.0 with the same properties and security parameters, and it connects to the same database without problems. The application that works fine is almost the same as the one that can't connect to SQL Server (actually it is the same application, but with several modifications). Some information related to the problem:

    - OS: Windows 2003 Server
    - SQL Server engine: SQL Server 2008
    - Does SQL Server accept remote connections? Yes.
    - ASP.NET version: 2.0.50727
    - Are TCP/IP connections enabled for the SQL Server instance? Yes.
    - Does the user in the connection string exist in the database with the "owner" role? Yes.
    - ORM tool used: NHibernate

    I get the following error when I try to open the application in the browser: "Error while establishing a connection to the server. When connecting to SQL Server 2005, this failure may occur because under the default settings SQL Server does not allow remote connections. (provider: Shared Memory Provider, error: 40 - Could not open a connection to SQL Server)". To isolate the problem I ran some tests. For example, using the web app that works fine I can connect without any problem to the database used by the web app that can't. From this I concluded that the problem is within the web app and not in the SQL Server instance. I also googled the problem but sadly couldn't find anything useful to solve it. If someone can help me I'll appreciate it. Thank you so much for your time!

    Read the article

  • Putting BigDecimal data into HSQLDB test database using DbUnit

    - by Denise
    Hi everyone, I'm using Hibernate JPA in my backend. I am writing a unit test using JUnit and DbUnit to insert a set of data into an in-memory HSQL database. My dataset contains: <order_line order_line_id="1" quantity="2" discount_price="0.3"/> which maps to an OrderLine Java object where the discount_price column is defined as: @Column(name = "discount_price", precision = 12, scale = 2) private BigDecimal discountPrice; However, when I run my test case and assert that the discount price returned equals 0.3, the assertion fails and says that the stored value is 0. If I change the discount_price in the dataset to 0.9, it rounds up to 1. I've checked to make sure HSQLDB isn't doing the rounding, and it definitely isn't, because I can insert an order line object with a value like 5.3 from Java code and it works fine. To me, it seems like DbUnit is for some reason rounding the number I've defined. Is there a way I can force this not to happen? Can anyone explain why it might be doing this? Thanks!

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers, both online and from talking with other developers/programmers at my work. I am not sure if I should have split them into 3 questions, but they all seem sort of related: 1) How does .NET handle instances of classes and other code things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but then I was told that ".NET handles a lot of that stuff automatically" when I mentioned it. 2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and that you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the Hibernate session. 3) How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many, many code examples where the Dispose method is never called. Also, in general I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query. Thanks all. I am loving this site so far, I kind of get lost and spend hours just reading things on here. =)

    Read the article

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome. In particular I have to deal with issues like:

    - parsing a stream of bytes into packets
    - verifying that a packet is correct (easy with a CRC, for instance)
    - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped runs of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
    - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
    - how to make variable-length packets robust to error correction / recovery.

    Any suggestions?
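
    For the framing, CRC and resync points above, one common pattern is a start-of-frame delimiter plus byte escaping, with a CRC over the payload. A minimal sketch in Python 3 (the delimiter/escape byte values and the CRC choice are illustrative, not a recommendation):

        import binascii

        SOF, ESC, XOR = 0x7E, 0x7D, 0x20          # HDLC-style values, purely illustrative

        def frame(payload):
            body = payload + binascii.crc32(payload).to_bytes(4, "little")
            out = bytearray([SOF])
            for b in body:
                if b in (SOF, ESC):
                    out += bytes([ESC, b ^ XOR])  # escape bytes that collide with the framing
                else:
                    out.append(b)
            out.append(SOF)
            return bytes(out)

        def deframe(stream):
            # yields payloads; frames that fail the CRC are dropped, and we resync on the next SOF
            buf, esc, in_frame = bytearray(), False, False
            for b in stream:
                if b == SOF:
                    if in_frame and len(buf) >= 4:
                        data, crc = bytes(buf[:-4]), bytes(buf[-4:])
                        if binascii.crc32(data).to_bytes(4, "little") == crc:
                            yield data
                    buf, esc, in_frame = bytearray(), False, True
                elif in_frame:
                    if esc:
                        buf.append(b ^ XOR)
                        esc = False
                    elif b == ESC:
                        esc = True
                    else:
                        buf.append(b)

    With this shape, a single-bit error shows up as a CRC failure and costs one packet, while a dropped or inserted byte desynchronises the parser only until the next delimiter, which covers the resync requirement for variable-length packets.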

    Read the article

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in gzip library. I've looked through almost every other Stack question about it, and none of them seem to work. My problem is that when I try to decompress I get an IOError. This is what I'm getting:

        Traceback (most recent call last):
          File "mymodule.py", line 61, in
            return gz.read()
          File "/usr/lib/python2.7/gzip.py", line 245, in read
            self._read(readsize)
          File "/usr/lib/python2.7/gzip.py", line 287, in _read
            self._read_gzip_header()
          File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header
            raise IOError, 'Not a gzipped file'
        IOError: Not a gzipped file

    This is my code to send it over SMB. It might not make sense why I do things this way; it's normally in a while loop and memory efficient, I've just simplified it.

        buffer = cStringIO.StringIO(output)   # output is from a subprocess call
        small_buffer = cStringIO.StringIO()
        small_string = buffer.read()          # need a string to write to the buffer
        gzip_obj = gzip.GzipFile(fileobj=small_buffer, compresslevel=6, mode='wb')
        gzip_obj.write(small_string)
        compressed_str = small_buffer.getvalue()

        blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB)
        remainder = '|'*(8 - (len(compressed_str) % 8))
        compressed_str += remainder
        encrypted = blowfish.encrypt(compressed_str)
        # I send it over SMB, then retrieve it later

    Then this is the code that retrieves it:

        # buffer is a cStringIO object filled with data from the SMB retrieval
        decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB)
        value = buffer.getvalue()
        decrypted = decrypter.decrypt(value)
        buff = cStringIO.StringIO(decrypted)
        buff.seek(0)
        gz = gzip.GzipFile(fileobj=buff)
        return gz.read()

    The problem is the last line, return gz.read().
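
    Two things in the code above are worth fixing regardless of whether they are the cause here (a hedged guess, not a confirmed diagnosis): the GzipFile is never closed before getvalue(), so the stream lacks its trailer and possibly some buffered data; and the '|' padding added for Blowfish is still attached when the data is handed back to gzip, and Python 2.7's gzip treats trailing garbage after a complete stream as a corrupt second member. A sketch of both changes:

        # compress side: finish the stream before reading the buffer
        gzip_obj.write(small_string)
        gzip_obj.close()                          # flushes buffered data and writes the gzip trailer
        compressed_str = small_buffer.getvalue()

        # decrypt side: strip the padding before handing the bytes to gzip
        decrypted = decrypter.decrypt(value)
        decrypted = decrypted.rstrip('|')         # assumes the gzip data itself never ends in '|';
                                                  # a count-based pad (PKCS#7 style) avoids that guess
        buff = cStringIO.StringIO(decrypted)
        gz = gzip.GzipFile(fileobj=buff)
        return gz.read()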

    Read the article

  • Space-saving character encoding for Japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character ranges and a lot of unused code points. So if I use them directly I waste a lot of memory (not only for storing multi-byte text - I mean especially the gaps in my bitmap font) - and VRAM is usually really valuable... So the only reasonable thing seems to be a custom mapping from, e.g., UTF-8 characters onto my texture (so that no space is wasted). BUT: that effort seems to be the same as using my own proprietary character encoding (and thus my own ordering of characters in the texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK code pages). Has anybody ever had a similar problem (I'd really wonder if not)? Is there already an established approach? Edit: the same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to map these bitmap-font glyphs to UTF-8 space-efficiently. So any further help is welcome!
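
    One way to read the "custom mapping" idea, sketched in Python: keep the text itself in UTF-8 and build a table from the code points the application actually needs to glyph slots in the texture, so no texture space is wasted on unused ranges. latin_chars, japanese_chars and blit_glyph below are placeholders for data and functions the application would provide:

        # characters actually needed: Latin plus the Japanese glyphs collected from the game text
        needed = sorted(set(latin_chars) | set(japanese_chars))
        assert len(needed) <= 4096                       # the texture holds 4096 glyphs

        glyph_index = {ch: slot for slot, ch in enumerate(needed)}   # code point -> texture slot

        def draw(text):
            for ch in text:
                blit_glyph(glyph_index[ch])              # hypothetical renderer call

        def encode(text):
            # optional: store strings as glyph indices (12 bits each are enough for 4096 slots)
            return [glyph_index[ch] for ch in text]

    The mapping only decides where each character lives in the texture; storage can stay standard UTF-8, so no proprietary character encoding is needed for the text itself.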

    Read the article

  • use startActivityForResult from non-activity

    - by rayman
    Hi, I have MainActivity, which is an Activity, and another class (a simple Java class) that we'll call SimpleClass. Now I want to call startActivityForResult from that class. I thought I could pass SimpleClass only MainActivity's Context, but the problem is that you can't call context.startActivityForResult(...); so the only way to make SimpleClass use startActivityForResult is to pass a reference to MainActivity as an Activity variable to SimpleClass, something like this. Inside the MainActivity class I create the instance of SimpleClass this way:

        SimpleClass simpleClass = new SimpleClass(MainActivity.this);

    And this is how SimpleClass looks:

        public class SimpleClass {
            Activity myMainActivity;

            public SimpleClass(Activity mainActivity) {
                super();
                this.myMainActivity = mainActivity;
            }

            public void someMethod(...) {
                myMainActivity.startActivityForResult(...);
            }
        }

    Now it's working, but isn't there a proper way of doing this? I'm afraid I could have some memory leaks in the future. Thanks. ray.
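
    One common way to reduce the leak risk is not to keep the Activity as a long-lived field at all, but to hand it in per call. A sketch (the request code and target Activity are illustrative placeholders):

        import android.app.Activity;
        import android.content.Intent;

        public class SimpleClass {
            private static final int REQUEST_CODE = 42;   // illustrative

            // no Activity field: the reference only lives for the duration of the call
            public void someMethod(Activity caller) {
                Intent intent = new Intent(caller, OtherActivity.class);  // OtherActivity is a placeholder
                caller.startActivityForResult(intent, REQUEST_CODE);
            }
        }

    Holding the Activity in a field only becomes a leak if SimpleClass outlives the Activity (for example if it is static, cached, or tied to the Application); if SimpleClass is created and dropped together with MainActivity, the original approach is fine.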

    Read the article

  • Need to call original function from detoured function

    - by peachykeen
    I'm using Detours to hook into an executable's message function, but I need to run my own code and then call the original code. From what I've seen in the Detours docs, it definitely sounds like that should happen automatically. The original function prints a message to the screen, but as soon as I attach a detour it starts running my code and stops printing. The original function is roughly:

        void CGuiObject::AppendMsgToBuffer(classA, unsigned long, unsigned long, int, classB);

    My function is:

        void CGuiObject_AppendMsgToBuffer( [same params, with names] );

    I know the memory position the original function resides at, so using:

        DWORD OrigPos = 0x0040592C;
        DetourAttach( (void*)OrigPos, CGuiObject_AppendMsgToBuffer);

    gets me into the function. This code works almost perfectly: my function is called with the proper parameters. However, execution leaves my function and the original code is not called. I've tried jmping back in, but that crashes the program (I'm assuming the code Detours moved to fit the hook is responsible for the crash). Edit: I've managed to fix the first issue, the failure to return to program execution. By calling the OrigPos value as a function, I'm able to reach the "trampoline" function and from there the original code. However, somewhere along the way the registers are changing, and that causes the program to crash with a segfault as soon as I get back into the original code.

    Read the article

  • Load Spikes on a Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out DOS attack, MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks every 3 minutes for the load, and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor numbers on the site that can trigger this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad. Apache is trying to serve something that is causing it to spawn more and more processes till it keels over. My question is: Given that I am convinced that this is a rewrite issue or something similar, how can I try and figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate this as an issue and look for something else. Thanks! Vikram

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote whose API is based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if(&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course this tests some facts, like the array actually behaving as wanted with 1 value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know if some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void) {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are there to ensure it's actually working.

    Read the article

  • ASP.NET MVC Paging for a search form

    - by James Alexander
    I've read several different posts on paging in MVC, but none describe a scenario where I have something like a search form and then want to display the results of the search criteria (with paging) beneath the form once the user clicks submit. My problem is that the paging solution I'm using will create <a href="..."> links that pass the desired page like so: http://mysite.com/search/2/ and while that's all fine and dandy, I don't have the results of the query that was sent to the DB in memory or anything, so I need to query the DB again. If the results are handled by the POST controller action for /Search and the first page of the data is rendered as such, how do I get the same results (based on the form criteria specified by the user) when the user clicks to move to page 2? Some JavaScript voodoo? Leverage session state? Make my GET controller action have the same variables expected by the search criteria (but optional), and when the GET action is called, instantiate a FormCollection instance, populate it and pass it to the POST action method (thereby satisfying DRY)? Can someone point me in the right direction for this scenario or provide examples that have been implemented in the past? Thanks!

    Read the article

  • Proper use of HttpRequestInterceptor and CredentialsProvider in doing preemptive authentication with HttpClient

    - by Preston
    I'm writing an application in Android that consumes some REST services I've created. These web services aren't issuing a standard Apache Basic challenge/response. Instead, in the server-side code I want to interrogate the username and password from the HTTP(S) request and compare them against a database user to make sure they can run that service. I'm using HttpClient to do this, and I have the credentials stored on the client after the initial login (at least that's how I see this working). So here is where I'm stuck. Preemptive authentication under HttpClient requires you to set up an interceptor as a static member. This is the example Apache Components uses:

        HttpRequestInterceptor preemptiveAuth = new HttpRequestInterceptor() {
            @Override
            public void process(final HttpRequest request, final HttpContext context)
                    throws HttpException, IOException {
                AuthState authState = (AuthState) context.getAttribute(ClientContext.TARGET_AUTH_STATE);
                CredentialsProvider credsProvider = (CredentialsProvider) context.getAttribute(
                        ClientContext.CREDS_PROVIDER);
                HttpHost targetHost = (HttpHost) context.getAttribute(ExecutionContext.HTTP_TARGET_HOST);

                if (authState.getAuthScheme() == null) {
                    AuthScope authScope = new AuthScope(targetHost.getHostName(), targetHost.getPort());
                    Credentials creds = credsProvider.getCredentials(authScope);
                    if (creds != null) {
                        authState.setAuthScheme(new BasicScheme());
                        authState.setCredentials(creds);
                    }
                }
            }
        };

    So the question would be this: what would the proper use of this be? Would I spin this up as part of the application when the application starts, pulling the username and password out of memory and then using them to create this CredentialsProvider, which is then used by the HttpRequestInterceptor? Or is there a way to do this more dynamically?
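
    For reference, a sketch of how such an interceptor is typically wired up once when the client is created, with the credentials captured at login (variable names are illustrative; AuthScope.ANY is used only to keep the sketch short):

        DefaultHttpClient client = new DefaultHttpClient();

        // run the interceptor before every request
        client.addRequestInterceptor(preemptiveAuth, 0);

        // credentials captured after the initial login
        client.getCredentialsProvider().setCredentials(
                AuthScope.ANY,
                new UsernamePasswordCredentials(username, password));

    The client, and the credentials provider inside it, can live for the lifetime of the application, so the credentials only need to be set once after login; the interceptor then attaches them preemptively to each request instead of waiting for a 401 challenge.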

    Read the article

  • MYSQL not running on Ubuntu OS - Error 2002.

    - by mgj
    Hi, I am a novice with MySQL. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the MySQL version mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed this way; it would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        [ps then prints its usage help, trimmed here]
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.
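
    A likely explanation, for what it's worth: mysql-client-5.1 contains only the client program, not the server, which is why there is no mysqld process, no mysql system user, and no socket at /var/run/mysqld/mysqld.sock to connect to. Installing and starting the server on Ubuntu 10.04 looks roughly like this (package and service names as shipped by Ubuntu; the root password is asked for during installation):

        sudo apt-get install mysql-server-5.1
        sudo service mysql start        # or: sudo start mysql
        mysql -u root -p                # should now connect through the socket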

    Read the article

  • How to delete a QProcess instance correctly?

    - by Kopfschmerzen
    Hi everyone! I have a class looking like this:

        class FakeRunner : public QObject
        {
            Q_OBJECT
        private:
            QProcess* proc;
        public:
            FakeRunner();

            int run()
            {
                if (proc)
                    return -1;
                proc = new QProcess();
                QStringList args;
                QString programName = "fake.exe";
                connect(proc, SIGNAL(started()), this, SLOT(procStarted()));
                connect(proc, SIGNAL(error(QProcess::ProcessError)), this, SLOT(procError(QProcess::ProcessError)));
                connect(proc, SIGNAL(finished(int, QProcess::ExitStatus)), this, SLOT(procFinished(int, QProcess::ExitStatus)));
                proc->start(programName, args);
                return 0;
            }

        private slots:
            void procStarted() {}
            void procFinished(int, QProcess::ExitStatus) {}
            void procError(QProcess::ProcessError);
        };

    Since fake.exe does not exist on my system, proc emits the error() signal. If I handle it like the following, my program crashes:

        void FakeRunner::procError(QProcess::ProcessError rc)
        {
            delete proc;
            proc = 0;
        }

    It works well, though, if I don't delete the pointer. So the question is: how (and when) should I delete the pointer to QProcess? I believe I have to delete it to avoid a memory leak. FakeRunner::run() can be invoked many times, so the leak, if there is one, will grow. Thanks!
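
    For what it's worth, the usual Qt answer is to avoid deleting a QObject from inside a slot connected to one of its own signals, because the object is still executing its signal emission when the slot runs. QObject::deleteLater() defers the destruction until control returns to the event loop, so a sketch of the handler would be:

        void FakeRunner::procError(QProcess::ProcessError rc)
        {
            proc->deleteLater();   // queued deletion; safe even while the sender is emitting
            proc = 0;              // run() can create a fresh QProcess next time
        }

    The same applies to the finished() handler; alternatively, creating the process with a parent (new QProcess(this)) and reusing a single instance avoids the manual delete entirely.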

    Read the article

  • AS3 using PrintJob to print a MovieClip

    - by Chris Waugh
    Hello, I am currently trying to create a function which will allow me to pass in a movie clip and print it. Here is the simplified version of the function:

        function printMovieClip(clip:MovieClip) {
            var printJob:PrintJob = new PrintJob();
            var numPages:int = 0;
            var printY:int = 0;
            var printHeight:Number;

            if ( printJob.start() ) {
                /* Resize movie clip to fit within page width */
                if (clip.width > printJob.pageWidth) {
                    clip.width = printJob.pageWidth;
                    clip.scaleY = clip.scaleX;
                }

                numPages = Math.ceil(clip.height / printJob.pageHeight);

                /* Add pages to print job */
                for (var i:int = 0; i < numPages; i++) {
                    printJob.addPage(clip, new Rectangle(0, printY, printJob.pageWidth, printJob.pageHeight));
                    printY += printJob.pageHeight;
                }

                /* Send print job to printer */
                printJob.send();

                /* Delete job from memory */
                printJob = null;
            }
        }

        printMovieClip( testMC );

    Unfortunately this is not working as expected, i.e. printing the full width of the MovieClip and doing page breaks on the length. Any help with this would be greatly appreciated. Many thanks, Chris

    Read the article

  • Fast Lightweight Image Comparisson Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should depend on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image - obtain a metric to use for similarity - check against use cases - return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, so this language is available. Specific constraints:

    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32-bit ARGB
    - ~2 seconds computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with the training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas:

    - I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function.
    - I was looking at using the colour histogram / greyscale histogram / texture / entropy of the image, combining them to make the measure.

    There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
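
    As one concrete example of a cheap pixel-only metric along those lines (a sketch, not something from the thread): a coarse RGB histogram fits in a few hundred bytes, is O(n) in the pixel count, and two of them can be compared in microseconds, so the 2-second budget for matching against the training data goes a long way.

        #include <stdint.h>
        #include <string.h>
        #include <math.h>

        /* 4x4x4 RGB histogram (64 bins) over 32-bit ARGB pixels */
        void histogram(const uint32_t *px, int n, float *hist)
        {
            memset(hist, 0, 64 * sizeof(float));
            for (int i = 0; i < n; i++) {
                int r = (px[i] >> 16) & 0xFF;
                int g = (px[i] >> 8)  & 0xFF;
                int b =  px[i]        & 0xFF;
                hist[((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6)] += 1.0f;
            }
            for (int i = 0; i < 64; i++)
                hist[i] /= (float)n;        /* normalise so the image size drops out */
        }

        /* L1 distance between two normalised histograms: 0 = identical, 2 = disjoint */
        float l1_distance(const float *a, const float *b)
        {
            float d = 0.0f;
            for (int i = 0; i < 64; i++)
                d += fabsf(a[i] - b[i]);
            return d;
        }

    A greyscale histogram, an entropy value or a tiny downscaled thumbnail can be concatenated onto the same feature vector if colour alone turns out not to discriminate between the filters well enough.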

    Read the article

  • Is there a fast alternative to creating a Texture2D from a Bitmap object in XNA?

    - by Matthew Bowen
    I've looked around a lot and the only methods I've found for creating a Texture2D from a Bitmap are:

        using (MemoryStream s = new MemoryStream())
        {
            bmp.Save(s, System.Drawing.Imaging.ImageFormat.Png);
            s.Seek(0, SeekOrigin.Begin);
            Texture2D tx = Texture2D.FromFile(device, s);
        }

    and

        Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height, 0,
                                     TextureUsage.None, SurfaceFormat.Color);
        tx.SetData<byte>(rgbValues, 0, rgbValues.Length, SetDataOptions.NoOverwrite);

    where rgbValues is a byte array containing the bitmap's pixel data in 32-bit ARGB format. My question is: are there any faster approaches I can try? I am writing a map editor which has to read in custom-format images (map tiles) and convert them into Texture2D textures to display. The previous version of the editor, a C++ implementation, converted the images first into bitmaps and then into textures to be drawn using DirectX. I have attempted the same approach here; however, both of the above approaches are significantly too slow. Loading all of the textures required for a map into memory takes ~250 seconds for the first approach and ~110 seconds for the second approach on a reasonably specced computer. If there is a method to edit the data of a texture directly (such as with the Bitmap class's LockBits method) then I would be able to convert the custom-format images straight into a Texture2D and hopefully save processing time. Any help would be very much appreciated. Thanks
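
    If the custom tile format can be decoded straight into a byte array, the Bitmap round trip can be skipped entirely; and where a Bitmap is unavoidable, LockBits avoids the PNG encode/decode that makes the first approach so slow. A sketch of the LockBits path, using the same XNA 3.x types as in the question (FromBitmap is just an illustrative helper name; depending on the XNA version the red and blue channels may need swapping):

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;
        using Microsoft.Xna.Framework.Graphics;

        Texture2D FromBitmap(GraphicsDevice device, Bitmap bmp)
        {
            Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height, 0,
                                         TextureUsage.None, SurfaceFormat.Color);

            BitmapData data = bmp.LockBits(
                new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
                ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            try
            {
                // for 32bpp bitmaps the stride equals Width * 4, so one flat copy suffices
                byte[] pixels = new byte[data.Stride * data.Height];
                Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
                tx.SetData<byte>(pixels);
            }
            finally
            {
                bmp.UnlockBits(data);
            }
            return tx;
        }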

    Read the article

  • Fixing a multi-threaded pycurl crash.

    - by Rook
    If I run pycurl in a single thread everything works great. If I run pycurl in 2 threads, Python crashes with an access violation. The first thing I did was report the problem to pycurl, but the project died about 3 years ago so I'm not holding my breath. My (hackish) solution is to build a 2nd version of pycurl called "pycurl_thread" which will only be used by the 2nd thread. I downloaded the pycurl module from SourceForge and made a total of 4 line changes. But Python is still crashing. My guess is that even though this is a module with a different name (import pycurl_thread), it is still sharing memory with the original module (import pycurl). How should I solve this problem?

    Changes in pycurl.c:

        initpycurl(void)
    to
        initpycurl_thread(void)
    and
        m = Py_InitModule3("pycurl", curl_methods, module_doc);
    to
        m = Py_InitModule3("pycurl_thread", curl_methods, module_doc);

    Changes in setup.py:

        PACKAGE = "pycurl"
        PY_PACKAGE = "curl"
    to
        PACKAGE = "pycurl_thread"
        PY_PACKAGE = "curl_thread"

    Here is the segfault I'm getting. It happens within the C function do_curl_perform().

        *** longjmp causes uninitialized stack frame ***: python2.7 terminated
        ======= Backtrace: =========
        /lib/libc.so.6(__fortify_fail+0x37)[0x7f209421b537]
        /lib/libc.so.6(+0xff4c9)[0x7f209421b4c9]
        /lib/libc.so.6(__longjmp_chk+0x33)[0x7f209421b433]
        /usr/lib/libcurl.so.4(+0xe3a5)[0x7f20931da3a5]
        /lib/libpthread.so.0(+0xfb40)[0x7f209532eb40]
        /lib/libc.so.6(__poll+0x53)[0x7f20941f6203]
        /usr/lib/libcurl.so.4(Curl_socket_ready+0x116)[0x7f2093208876]
        /usr/lib/libcurl.so.4(+0x2faec)[0x7f20931fbaec]
        /usr/local/lib/python2.7/dist-packages/pycurl.so(+0x892b)[0x7f209342c92b]
        python2.7(PyEval_EvalFrameEx+0x58a1)[0x4adf81]
        python2.7(PyEval_EvalCodeEx+0x891)[0x4af7c1]
        python2.7(PyEval_EvalFrameEx+0x538b)[0x4ada6b]
        python2.7(PyEval_EvalFrameEx+0x65f9)[0x4aecd9]
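
    One thing worth checking before building a second copy of the module: the backtrace shows a longjmp inside libcurl, which is the classic symptom of libcurl's signal/alarm based timeout handling being used from multiple threads. libcurl's thread-safety notes say to turn that off with CURLOPT_NOSIGNAL, which pycurl exposes as a per-handle option:

        import pycurl

        def make_handle(url):
            c = pycurl.Curl()              # one Curl handle per thread; handles are never shared
            c.setopt(pycurl.NOSIGNAL, 1)   # don't use signals/longjmp for resolver timeouts
            c.setopt(pycurl.URL, url)
            return c

    With NOSIGNAL set, libcurl stops installing the alarm/longjmp machinery (at the cost of DNS-resolution timeouts unless libcurl was built against c-ares), and the duplicate pycurl_thread module should not be needed.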

    Read the article

  • Segmentation fault on writing char to char* address

    - by Lukas Dojcak
    Hi guys, I've got a problem with my little C program. Maybe you could help me.

        char* shiftujVzorku(char* text, char* pattern, int offset){
            char* pom = text;
            int size = 0;
            int index = 0;
            while(*(text + size) != '\0'){
                size++;
            }
            while(*(pom + index) != '\0'){
                if(overVzorku(pom + index, pattern)){
                    while(*pattern != '\0'){
                        // swap *pom with *(pom + offset)
                        if(pom + index + offset < text + size){
                            char x = *(pom + index + offset);
                            char y = *(pom + index);
                            int adresa = *(pom + index + offset);
                            *(pom + index + offset) = y;   // <<<<<< SEGMENTATION FAULT
                            *(pom + index) = x;
                            //*pom = *pom - *(pom + offset);
                            //*(pom + offset) = *(pom + offset) + *pom;
                            //*pom = *(pom + offset) - *pom;
                        }
                        else{
                            *pom = *pom - *(pom + offset - size);
                            *(pom + offset - size) = *(pom + offset - size) + *pom;
                            *pom = *(pom + offset - size) - *pom;
                        }
                        pattern++;
                    }
                    break;
                }
                index++;
            }
            return text;
        }

    It isn't really important what the program is doing, and maybe there are lots of bugs, but why do I get a segmentation fault at the marked line? I'm just trying to write a char value to the memory at address "pom + index + offset". Thanks for anything helpful. :)
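
    A common cause of exactly this crash, assuming the function is called with a string literal as the text argument: string literals live in read-only memory, so every read works but the first write through the pointer faults. A sketch of the difference (the strings and calls are illustrative; overVzorku is the poster's own function):

        char *ro = "ahoj svet";            /* points at a read-only string literal           */
        shiftujVzorku(ro, "oj", 2);        /* first write through the pointer -> segfault    */

        char rw[] = "ahoj svet";           /* writable array copy on the stack               */
        shiftujVzorku(rw, "oj", 2);        /* same call, no fault                            */

        char *copy = strdup(ro);           /* or a writable heap copy; needs <string.h>,     */
        shiftujVzorku(copy, "oj", 2);      /* and free(copy) when done                       */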

    Read the article

  • vector::erase with pointer member

    - by matt
    I am manipulating vectors of objects defined as follows:

        class Hyp{
        public:
            int x;
            int y;
            double wFactor;
            double hFactor;
            char shapeNum;
            double* visibleShape;
            int xmin, xmax, ymin, ymax;

            Hyp(int xx, int yy, double ww, double hh, char s):
                x(xx), y(yy), wFactor(ww), hFactor(hh), shapeNum(s)
                {visibleShape=0; shapeNum=-1;};

            //Copy constructor necessary for support of vector::push_back() with visibleShape
            Hyp(const Hyp &other)
            {
                x = other.x;
                y = other.y;
                wFactor = other.wFactor;
                hFactor = other.hFactor;
                shapeNum = other.shapeNum;
                xmin = other.xmin;
                xmax = other.xmax;
                ymin = other.ymin;
                ymax = other.ymax;
                int visShapeSize = (xmax-xmin+1)*(ymax-ymin+1);
                visibleShape = new double[visShapeSize];
                for (int ind=0; ind<visShapeSize; ind++)
                {
                    visibleShape[ind] = other.visibleShape[ind];
                }
            };

            ~Hyp(){delete[] visibleShape;};
        };

    When I create a Hyp object, allocate/write memory to visibleShape and add the object to a vector with vector::push_back, everything works as expected: the data pointed to by visibleShape is copied using the copy constructor. But when I use vector::erase to remove a Hyp from the vector, the other elements are moved correctly EXCEPT the pointer members visibleShape, which are now pointing to wrong addresses! How can I avoid this problem? Am I missing something?
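
    The usual explanation for this symptom is the rule of three: vector::erase shifts the later elements down using the copy assignment operator, and since Hyp defines only a copy constructor and a destructor, the compiler-generated operator= copies the raw visibleShape pointer; the destructor of the vacated slot then deletes the array that a surviving element still points to. A sketch of the missing operator=, reusing the existing copy constructor (declare Hyp& operator=(const Hyp&); inside the class):

        #include <algorithm>   // std::swap

        Hyp& Hyp::operator=(const Hyp& other)
        {
            if (this != &other)
            {
                Hyp tmp(other);                            // deep copy via the existing copy ctor
                std::swap(visibleShape, tmp.visibleShape); // tmp's destructor frees our old array
                x = tmp.x; y = tmp.y;
                wFactor = tmp.wFactor; hFactor = tmp.hFactor;
                shapeNum = tmp.shapeNum;
                xmin = tmp.xmin; xmax = tmp.xmax;
                ymin = tmp.ymin; ymax = tmp.ymax;
            }
            return *this;
        }

    With that in place std::vector<Hyp> copies, erases and inserts correctly; replacing the raw pointer with a std::vector<double> member would make all three hand-written special members unnecessary.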

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort, however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of a file-per-table). We're running MySQL 5.0.46 on an 8 core machine, 16G memory and a RAID10 config. I have some experience with MySQL tuning, but this usually focusses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject, however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema changes we are planning to do is adding a integer column to all tables and make it the primary key, instead of the current primary key. We need to keep the 'old' column as well so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quick as possible?
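
    For what it's worth, the knobs usually mentioned for this kind of one-off rebuild on MySQL 5.0 with InnoDB are below (values are illustrative for a 16G machine; test on a copy first, and note that changing the log file size requires a clean shutdown and moving the old ib_logfile* files aside):

        -- session-level, just before the INSERT INTO ... SELECT
        SET unique_checks = 0;
        SET foreign_key_checks = 0;

        -- my.cnf for the duration of the migration
        -- innodb_buffer_pool_size        = 10G
        -- innodb_log_file_size           = 512M
        -- innodb_log_buffer_size         = 32M
        -- innodb_flush_log_at_trx_commit = 2

    These are worth reverting once the migration is done, since flush_log_at_trx_commit = 2 and the disabled checks trade durability and safety for speed.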

    Read the article

  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! Actually this is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here.

    What I am interested in is just how fast the implementation can be in Python. I have done a naive implementation with the module BitVector. The code is as follows:

        from BitVector import BitVector
        import timeit
        import random
        import time
        import sys

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector( size = len(input_li) )
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 up to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348

        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841

        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392

        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878

        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
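
    One likely reason for the gap (an observation, not from the original article): BitVector does its bit manipulation in pure Python, so every bv[i] assignment and lookup is a Python-level call, while sorted() runs entirely in C. Staying in pure Python but using a bytearray as the presence table removes most of that overhead; a sketch that works for the same test data (a shuffled permutation of range(n)):

        def vec_sort_bytearray(input_li):
            n = len(input_li)
            seen = bytearray(n)           # one byte per value instead of per-bit Python objects
            for i in input_li:
                seen[i] = 1
            return [i for i in xrange(n) if seen[i]]

    To actually beat sorted(), the inner loop has to leave Python entirely (array/numpy or a C extension), which is also the spirit of the original Programming Pearls solution.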

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.) and each property type share a majority of the attributes, but there are a few that are unique to that property type. My question is the shared attributes are in excess of 250 fields and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that and it would make running queries a real pain as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, C# and data is passed to asp.net web application via WCF service. Thanks in advance.

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether its acceptable to pass data between ViewControllers versus doing a local fetch in each ViewController as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance etc) but the passing data approach is so prevalent in my app and I'm spooked by all the stories about Apple rejecting apps because of not conforming to their standard guidelines. So let me put another way -- is it non-standard to pass data between VC's? The reason I pass data so much is because each ViewController is just another view on to data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail into data and so I just pass these objects to the next VC. Separately, one possible downside with this passing-data-to-each-VC approach is I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only but I do have one table with 5000 rows and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.

    Read the article

  • Why doesn't gcc remove this check of a non-volatile variable?

    - by Thomas
    This question is mostly academic. I ask out of curiosity, not because this poses an actual problem for me. Consider the following incorrect C program.

        #include <signal.h>
        #include <stdio.h>

        static int running = 1;

        void handler(int u)
        {
            running = 0;
        }

        int main()
        {
            signal(SIGTERM, handler);
            while (running)
                ;
            printf("Bye!\n");
            return 0;
        }

    This program is incorrect because the handler interrupts the program flow, so running can be modified at any time and should therefore be declared volatile. But let's say the programmer forgot that. gcc 4.3.3, with the -O3 flag, compiles the loop body (after one initial check of the running flag) down to the infinite loop

        .L7:
                jmp .L7

    which was to be expected. Now we put something trivial inside the while loop, like:

        while (running)
            putchar('.');

    And suddenly, gcc does not optimize the loop condition anymore! The loop body's assembly now looks like this (again at -O3):

        .L7:
                movq    stdout(%rip), %rsi
                movl    $46, %edi
                call    _IO_putc
                movl    running(%rip), %eax
                testl   %eax, %eax
                jne     .L7

    We see that running is re-loaded from memory each time through the loop; it is not even cached in a register. Apparently gcc now thinks that the value of running could have changed. So why does gcc suddenly decide that it needs to re-check the value of running in this case?
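
    For completeness, the declaration the program should have used, which also removes the difference between the two variants since the compiler must then reload the flag on every iteration in both cases:

        #include <signal.h>

        static volatile sig_atomic_t running = 1;   /* volatile: re-read on every access;
                                                       sig_atomic_t: safe to write from a handler */

    As for the observed behaviour, the usual explanation is that putchar is a call into external code, and because handler's address escapes (it is passed to signal), the compiler must assume such a call could end up invoking the handler and changing running, so it reloads the flag after the call; the empty loop contains no call, so the load can be hoisted out.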

    Read the article
