Search Results

Search found 25401 results on 1017 pages for 'adobe partner program'.


  • Using an image as a border

    - by Tempname
    I am working on a custom container and I need a border for it. I have a 15x15 image from which I am creating a 9-slice border skin. The issue is that the border skin does not appear the way I had hoped: I have a screenshot of the skin in place, and ideally I should get a transparent box with a 5-pixel border around it. Here is my current testing code:
    CSS code:
        Box {
            borderSkin: Embed(source="15x15.png", scaleGridLeft="5", scaleGridTop="5", scaleGridRight="10", scaleGridBottom="10");
        }
    MXML code:
        <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Style source="MainTest.css"/>
            <mx:Box id="tw" width="400" height="400">
            </mx:Box>
        </mx:WindowedApplication>

    Read the article

  • maximum memory which malloc can allocate!

    - by Vikas
    I was trying to figure out how much memory I can allocate with malloc, at most, on my machine (1 GB RAM, 160 GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program's memory consumption exceeds a certain level, the computer stops working because other applications do not get the memory they require. So to confirm this, I wrote a small program in C:
        int main() {
            int *p;
            while (1) {
                p = (int *)malloc(4);
                if (!p)
                    break;
            }
        }
    I was hoping that at some point the allocation would fail and the loop would break, but my computer hung as if it were an infinite loop. I waited for about an hour and finally had to forcibly shut down my computer. Some questions: Does malloc allocate memory from the HD as well? What was the reason for the above behaviour? Why didn't the loop break at any point? Why was there no allocation failure?
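
    A minimal sketch of one way to make the limit visible, assuming a Windows build of roughly this vintage: allocating in large chunks and touching the memory (so the pages are actually committed) reaches the failure point quickly instead of letting the machine page itself to a crawl in 4-byte steps. The 1 MB chunk size is an arbitrary choice for illustration.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            const size_t chunk = 1024 * 1024;      /* allocate 1 MB at a time */
            unsigned long total_mb = 0;
            void *p;

            while ((p = malloc(chunk)) != NULL) {
                memset(p, 1, chunk);               /* touch the pages so they are really committed */
                total_mb++;                        /* deliberately never freed: we want to hit the ceiling */
            }
            printf("malloc failed after roughly %lu MB\n", total_mb);
            return 0;
        }

    On a 32-bit Windows process this typically fails somewhere below 2 GB of address space regardless of installed RAM, because malloc draws on virtual memory (RAM plus the page file on disk), which is also why the original 4-byte loop appears to hang rather than fail promptly.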

    Read the article

  • Beginner's question: Variable in JavaScript

    - by Andreas
    Hello everybody! I'm new to JavaScript and can't get this little thing to work (all external scripts are of course loaded). I have this jQuery script:
        $('a.Link').click(function() {
            autoComp('City');
        });

        function autoComp(strFormName) {
            var Company = 'Adobe Apple Microsoft'.split(" ");
            var City = 'London Paris Berlin'.split(" ");
            $('#' + strFormName).autocomplete(strFormName)
        }
    I can't get it to work. I've discovered that the problem is the last "strFormName" after .autocomplete. I appreciate all the help I can get.
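
    A minimal sketch of one way around the underlying problem, with the caveat that the excerpt does not say which autocomplete plugin is in use (the option object below follows jQuery UI; an older plugin may take the array directly). The key point is that the string 'City' cannot reach a local variable of that name, but it can index a lookup object, and the word list itself, not its name, is what the autocomplete source needs:

        var wordLists = {
            Company: 'Adobe Apple Microsoft'.split(" "),
            City: 'London Paris Berlin'.split(" ")
        };

        function autoComp(strFormName) {
            // Look the array up by name, then hand the data (not the name) to the widget.
            $('#' + strFormName).autocomplete({ source: wordLists[strFormName] });
        }

        $('a.Link').click(function () {
            autoComp('City');
        });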

    Read the article

  • [C++] How can I handle weird errors from calculating acos / sin / atan2?

    - by Phrixus
    Has anyone seen this weird value while handling sin / cos / tan / acos and similar math calls?
        ===THE WEIRD VALUE===
        -1.#IND00
        =====================
        void inverse_pos(double x, double y, double& theta_one, double& theta_two)
        {
            // Assume that L1 = 350 and L2 = 250
            double B = sqrt(x*x + y*y);
            double angle_beta = atan2(y, x);
            double angle_alpha = acos((L2*L2 - B*B - L1*L1) / (-2*B*L1));
            theta_one = angle_beta + angle_alpha;
            theta_two = atan2((y - L1*sin(theta_one)), (x - L1*cos(theta_one)));
        }
    This is the code I was working on. Under certain conditions - for example when x and y are 10 and 10 - this code stores -1.#IND00 into theta_one and theta_two. It doesn't look like either characters or numbers. Without a doubt, atan2 / acos and friends are the problem. But try/catch doesn't work either, because those double variables have successfully stored some values in them. Moreover, the calculations that follow never complain about it and never break the program. I'm thinking of somehow forcing the use of this value to make the entire program crash, so that I can catch the error. Apart from that idea, I have no idea how I should check whether theta_one and theta_two have stored these crazy values. Any good ideas? Thank you in advance.
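
    For context, -1.#IND00 is how the Microsoft C runtime prints a quiet NaN ("indeterminate"), and acos produces a NaN whenever its argument falls outside [-1, 1] - which happens here when the point (x, y) is outside the arm's reachable range. A sketch of a checked version, assuming the same L1 = 350 and L2 = 250 constants (renamed L1_LEN/L2_LEN purely for illustration):

        #include <cmath>

        const double L1_LEN = 350.0;
        const double L2_LEN = 250.0;

        // Returns false instead of writing NaN into the output angles.
        bool inverse_pos_checked(double x, double y, double& theta_one, double& theta_two)
        {
            double B = std::sqrt(x*x + y*y);
            if (B == 0.0)
                return false;                                  // would divide by zero below

            double c = (L2_LEN*L2_LEN - B*B - L1_LEN*L1_LEN) / (-2.0*B*L1_LEN);
            if (c < -1.0 || c > 1.0)
                return false;                                  // acos(c) would be NaN: point unreachable

            theta_one = std::atan2(y, x) + std::acos(c);
            theta_two = std::atan2(y - L1_LEN*std::sin(theta_one), x - L1_LEN*std::cos(theta_one));
            return true;
        }

    Alternatively, a NaN can be detected after the fact with _isnan() from <float.h> on MSVC, or with the x != x trick, since a NaN compares unequal to everything, including itself.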

    Read the article

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether the C++ global operator new can be overloaded. It can.
        #include "stdafx.h"
        #include <iostream>
        #include <iomanip>
        #include <string>
        #include <new>
        using namespace std;

        class C {
        public:
            C() { cout << "CTOR" << endl; }
        };

        void* operator new(size_t size)
        {
            cout << "my overload of global plain old new" << endl;
            // try to allocate size bytes
            void* p = malloc(size);
            return (p);
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            C* pc1 = new C;
            cin.get();
            return 0;
        }
    In the above, my definition of operator new is called. If I remove that function from the code, then the operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good. However, in my opinion, my implementation of operator new does NOT overload the new in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that, since operator new is so special, the one-definition rule does not apply here? Thanks.
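
    For what it's worth, the standard carves these functions out explicitly: the global allocation and deallocation functions are "replaceable", so a definition supplied by the program is used in place of the library's version rather than conflicting with it, and the one-definition rule is not violated. A small sketch of the usual companion to such a replacement - replacing the matching operator delete as well, and preserving the contract that a failed operator new throws std::bad_alloc rather than returning NULL:

        #include <cstdio>
        #include <cstdlib>
        #include <new>

        void* operator new(std::size_t size)
        {
            if (void* p = std::malloc(size ? size : 1))
                return p;
            throw std::bad_alloc();        // the default operator new throws on failure
        }

        void operator delete(void* p) throw()
        {
            std::printf("replacement operator delete\n");
            std::free(p);
        }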

    Read the article

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object:
        MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow(); // has a byte[] column named BackgroundImageByteArray
        using (MemoryStream stream = new MemoryStream())
        {
            this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
            myDataRow.BackgroundImageByteArray = stream.ToArray();
        }
    Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an OutOfMemoryException:
        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
        {
            this.Panel.BackgroundImage = Image.FromStream(stream);
        }
    The streams are the same length. I don't understand how one throws an out-of-memory exception and the other doesn't. How can I load this bitmap? P.S. I've also tried:
        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
        {
            stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length);
            // throws OutOfMemoryException here
        }
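
    Two GDI+ quirks are worth knowing here, offered as things to check rather than a confirmed diagnosis: GDI+ reports image data it cannot decode as an OutOfMemoryException (so the bytes coming back from the DataTable are worth comparing against what was originally saved), and an Image created with Image.FromStream requires its stream to remain open for the lifetime of the image, so a using block that disposes the stream can cause trouble later. A sketch of a load that sidesteps the second issue by copying into a fresh Bitmap (LoadBackground is an illustrative helper name):

        using System.Drawing;
        using System.IO;

        Image LoadBackground(byte[] bytes)
        {
            using (var stream = new MemoryStream(bytes))
            using (var decoded = Image.FromStream(stream))
            {
                // Copy into a new Bitmap so nothing keeps a reference to the stream.
                return new Bitmap(decoded);
            }
        }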

    Read the article

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output. This is my code:
        input_data = open('input.txt','r')    #this opens the file to read it.
        output_data = open('output.txt','w')  #this opens a file to write to.
        userStr = (raw_input('Enter the word to be replaced:'))  #this prompts the user for a word
        userReplace = (raw_input('What should I replace all occurences of ' + userStr + ' with?'))  #this prompts the user for the replacement word
        for line in input_data:
            words = line.split()
            if userStr in words:
                output_data.write(line + userReplace)
            else:
                output_data.write(line)
        print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt'  #this tells the user that we have replaced the words they gave us
        input_data.close()   #this closes the documents we opened before
        output_data.close()
    It won't replace anything in the output file. Help!
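
    As written, the branch that finds a match writes the original line and then appends userReplace after it, so no word is ever substituted. A minimal sketch of the find/replace loop without str.replace(), assuming whole words separated by spaces (punctuation handling is left out) and Python 2 to match the raw_input/print usage above:

        input_data = open('input.txt', 'r')
        output_data = open('output.txt', 'w')

        userStr = raw_input('Enter the word to be replaced:')
        userReplace = raw_input('What should I replace all occurrences of ' + userStr + ' with?')

        for line in input_data:
            new_words = []
            for word in line.split():
                if word == userStr:
                    new_words.append(userReplace)   # substitute the matching word
                else:
                    new_words.append(word)          # keep everything else as-is
            output_data.write(' '.join(new_words) + '\n')

        input_data.close()
        output_data.close()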

    Read the article

  • Handling Corrupted JPEGs in C#

    - by ddango
    We have a process that pulls images from a remote server. Most of the time we're good to go: the images are valid, we don't time out, etc. However, every once in a while we see an error similar to this:
        Unhandled Exception: System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.
           at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
           at ConsoleApplication1.Program.Main(String[] args) in C:\images\ConsoleApplication1\ConsoleApplication1\Program.cs:line 24
    After not being able to reproduce it locally, we looked closer at the image and realized that there were artifacts, making us suspect corruption. We created an ugly little unit test with only the image in question, and were unable to reproduce the error on Windows 7, as expected. But after running our unit test on Windows Server 2008, we see this error every time. Is there a way to specify non-strictness for JPEGs when writing them? Some sort of check/fix we can use? Unit test snippet:
        var r = ReadFile("C:\\images\\ConsoleApplication1\\test.jpg");
        using (var imgStream = new MemoryStream(r))
        {
            using (var ms = new MemoryStream())
            {
                var guid = Guid.NewGuid();
                var fileName = "C:\\images\\ConsoleApplication1\\t" + guid + ".jpg";
                Image.FromStream(imgStream).Save(ms, ImageFormat.Jpeg);
                using (FileStream fs = File.Create(fileName))
                {
                    fs.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length);
                }
            }
        }
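
    System.Drawing does not expose a lenient JPEG decode switch as far as I know, but Image.FromStream has an overload with a validateImageData flag that forces the data to be checked up front, so a truncated or corrupt file fails immediately where it can be caught, logged, and skipped instead of surfacing later inside Save(). A sketch of that defensive shape (one incidental note: ms.GetBuffer() returns the whole internal buffer, which can be longer than the actual data, so (int)ms.Length is the safer write count):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Runtime.InteropServices;

        static bool TryReencodeJpeg(byte[] bytes, string outputPath)
        {
            try
            {
                using (var input = new MemoryStream(bytes))
                using (var image = Image.FromStream(input, false, true))   // validateImageData: true
                using (var output = File.Create(outputPath))
                {
                    image.Save(output, ImageFormat.Jpeg);
                    return true;
                }
            }
            catch (ArgumentException)  { return false; }   // unreadable/undecodable image data
            catch (ExternalException)  { return false; }   // the generic GDI+ error seen above
        }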

    Read the article

  • Writing iPhone apps on linux - What tools do you need

    - by Sia.G
    Hello. I wanted to know if it's possible to WRITE and COMPILE/TEST iPhone apps on a Linux platform. I've been on Google for a couple of days now, and people either say "Mac OS X only!" or "develop jailbroken apps on Linux". My dev partner has a Mac and has a certificate to sign the apps. I don't have a Mac, but I will be doing most of the development. So what I want to do is simply develop/test the app on Linux and, when it's finished, hand over the code to him; he will then compile the finalized app and sign it ready for submission to the App Store. Could anyone tell me what Linux tools I would need to accomplish this?

    Read the article

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP with multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:
        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }
    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:
        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();
            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck) { cons.Add(this); }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line)) break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }
    The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?
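
    A sketch of one way to wire the SerialPort into this design, reusing the existing lck and cons fields of the Connection class above (the StartSerialPump name is illustrative, not from the original): a dedicated thread reads the GPS stream line by line and fans each line out to every connected client under the same lock the client threads already use.

        // Could live as a static method on the Connection class above.
        static void StartSerialPump(SerialPort sp)
        {
            sp.Open();
            Thread pump = new Thread(delegate ()
            {
                while (true)
                {
                    string line = sp.ReadLine();          // blocks until a full NMEA sentence arrives
                    lock (lck)
                    {
                        foreach (Connection con in cons)
                        {
                            con.writer.WriteLine(line);
                            con.writer.Flush();           // push the sentence out immediately
                        }
                    }
                }
            });
            pump.IsBackground = true;
            pump.Start();
        }

    One related note: holding the lock across reader.ReadLine() in ClientLoop blocks every other client (and the pump) until that client sends a line, so reading outside the lock and only locking for the broadcast keeps the data flowing.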

    Read the article

  • In which year was the date the same as the original year?

    - by Marta
    It's my first question on this site, but I have always found this site really useful. What I mean with my question is: you ask the person to give a date (e.g. Fill in a date [dd-mm-yyyy]: 16-10-2013), and then you ask for an interval between two years (e.g. Give an interval [yyyy-yyyy]: 1800-2000). When the program runs, it has to show what day of the week the given date is; in this case it was a Wednesday. Then the program has to find in which years within the interval the date 16 October also fell on a Wednesday. So in the end it has to look something like this:
        Fill in a date: [dd-mm-yyyy]: 16-10-2013
        Give an interval [yyyy-yyyy]: 1900-2000
        16 October was a Wednesday in the following years:
        1905 1911 1916 1922 1933 1939 1944 1950 1961 1967 1972 1978 1989 1995 2000
        The full date is Wednesday 16 October, 2013
    The small (or biggest) problem is that I am not allowed to use the Date functions in Java. If someone can help me with the second part I would be really happy, because I have no idea how I am supposed to do it. To find out what day of the week the given date falls on, I use Zeller's congruence:
        class Day {
            Date date; // to grab the month and year from the Date class
            int day;
            final static String[] DAYS_OF_WEEK = {
                "Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"
            };

            public void dayWeekInterval() {
                // code to grab the interval from the Interval class, and doing stuff here
            }

            public int dayOfTheWeek() {
                int m = date.getMonth();
                int y = date.getYear();
                if (m < 3) {
                    m += 12;
                    y -= 1;
                }
                int k = y % 100;
                int j = y / 100;
                // q is the day of the month (16 in the example); it is not declared in this snippet
                int day = ((q + (((m + 1) * 26) / 10) + k + (k / 4) + (j / 4)) + (5 * j)) % 7;
                return day;
            }

            public String toString() {
                return "" + DAYS_OF_WEEK[day] + day;
            }
        }
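
    A sketch of the second part, on the assumption that the Zeller computation above is factored into a helper that takes day, month and year directly (the method names here are illustrative, not from the assignment): compute the weekday index of the original date once, then loop over every year in the interval, recompute the index for the same day and month, and print the years where the two match.

        // 0 = Saturday, 1 = Sunday, ... 6 = Friday (matching DAYS_OF_WEEK above)
        static int zeller(int q, int m, int y) {
            if (m < 3) { m += 12; y -= 1; }
            int k = y % 100;
            int j = y / 100;
            return (q + ((m + 1) * 26) / 10 + k + k / 4 + j / 4 + 5 * j) % 7;
        }

        static void printMatchingYears(int day, int month, int year, int from, int to) {
            int target = zeller(day, month, year);        // weekday of the original date
            for (int y = from; y <= to; y++) {
                if (zeller(day, month, y) == target) {
                    System.out.print(y + " ");
                }
            }
            System.out.println();
        }

    For the example above, printMatchingYears(16, 10, 2013, 1900, 2000) would list the years in 1900-2000 where 16 October falls on the same weekday as it does in 2013.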

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:
        1. Invoice arrives, in the form of an email attachment (Excel spreadsheet)
        2. Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
        3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price
        4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
        5. Monkey fires up HyperTerminal, types in "AT", disconnects
        6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time
    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:
        - Monkey asks Thunderbird (mail server perhaps?) for the attachment
        - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
        - Monkey parses the output, does the complex calculations // TODO: find a way to get the boss-generated prices with minimal manual labor involved
        - Monkey connects to the database, inserts data
        - Monkey spams customers
    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
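
    A minimal sketch of the "parse the output, do the calculations" step of the wishlist, assuming the invoice has already been exported to CSV; the column name unit_cost and the 550 multiplier are placeholders taken from the =B2*550 example above, not real details of the workflow:

        import csv

        def price_invoice(in_path, out_path, multiplier=550):
            with open(in_path) as src, open(out_path, 'w') as dst:
                reader = csv.DictReader(src)
                writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ['proposed_price'])
                writer.writeheader()
                for row in reader:
                    # the "extremely complex calculation" from the template
                    row['proposed_price'] = float(row['unit_cost']) * multiplier
                    writer.writerow(row)

        price_invoice('invoice.csv', 'for_boss.csv')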

    Read the article

  • Packing an exe + dll into one executable (not .NET)

    - by Bluebird75
    Hi, is anybody aware of a program that can pack several DLLs and an EXE into one executable? I am not talking about the .NET case here; I am talking about general DLLs, some of which I generate in C++, some of which are external DLLs I have no control over. My specific case is a Python program packaged with py2exe, where I would like to "hide" the other DLLs by packing them. The question is general enough, though. The things I have had a look at:
        - ILMerge: specific to .NET
        - NETZ: specific to .NET
        - UPX: does DLL compression, but not multiple DLL + EXE packing
        - FileJoiner: almost got it. It can pack an executable plus anything into one exe, but when opened it will launch the default opener for every file that was packed. So, if the user has dlldepend installed, it will launch it (because that's the default DLL opener). Maybe that's not possible?
    Summary of the answers: DLL loading is managed by the OS, so packing DLLs into an executable means that at some point they need to be extracted to a place where the OS can find them. No magic bullet. So what I want is not possible. Unless... we change something in the OS. Thanks Conrad for pointing me to ThinInstall, which virtualises the application and the OS loading mechanism. With ThinInstall it is possible to pack everything into one exe (DLLs, registry settings, ...).
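
    Since py2exe is already in the picture, one concrete piece of this is worth sketching: py2exe's bundle_files option can fold the Python DLL and the pure-Python dependencies into the single executable, which at least hides those. This is only a partial answer under the summary above - native extension modules and arbitrary third-party DLLs may still have to be extracted somewhere the OS loader can see them. A setup.py sketch, with file names as placeholders:

        from distutils.core import setup
        import py2exe

        setup(
            windows=['myapp.py'],                      # or console=['myapp.py']
            zipfile=None,                              # put the library archive inside the exe
            options={'py2exe': {'bundle_files': 1}},   # 1 = bundle everything, including the Python DLL
        )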

    Read the article

  • Is there an x86 or x64 emulator that passes system calls back to the Windows API?

    - by Chris Lomont
    I want to emulate Windows programs (not a VM, true emulation) under Windows. This would require the emulator to make calls back to the system APIs, but the program itself would be emulated. The reason is that I want to change the opcode formats for research purposes. The process should be: take an existing program, disassemble it, reassemble it with my new opcode formats, and put the new format into the PE with a stub that calls the emulator and passes it the new code. The emulator would have to pass system calls from the emulated side back to Windows API calls. I can do all these steps, except that I need an open-source emulator with the ability to pass the API calls out. I could try Bochs or QEMU, but I think I'd have to add in the system-call handling, which I could do if needed. I wonder if there is already something closer to what I need. I know I would have to change the instruction decoding in the emulator to match my new formats, but that is a given. Thanks.

    Read the article

  • How do I make VC++'s debugger break on exceptions?

    - by Mason Wheeler
    I'm trying to debug a problem in a DLL written in C that keeps causing access violations. I'm using Visual C++ 2008, but the code is straight C. I'm used to Delphi, where if an exception occurs while running under the debugger, the program will immediately break to the debugger and it will give you a chance to examine the program state. In Visual C++, though, all I get is a message in the Output tab: First-chance exception at blah blah blah: Access violation reading location 0x04410000. No breaks, nothing. It just goes and unwinds the stack until it's back in my Delphi EXE, which recognizes something's wrong and alerts me there, but by that point I've lost several layers of call stack and I don't know what's going on. I've tried other debugging techniques, but whatever it's doing is taking place deep within a nested loop inside a C macro that's getting called more than 500 times, and that's just a bit beyond my skill (or my patience) to trace through. I figure there has to be some way to get the "first-chance" exception to actually give me a "chance" to handle it. There's probably some "break immediately on first-chance exceptions" configuration setting I don't know about, but it doesn't seem to be all that discoverable. Does anyone know where it is and how to enable it?

    Read the article

  • Twitter oauth/request_token failing sometimes

    - by Techpriester
    Hello there. I'm implementing Twitter's OAuth for Adobe AIR in JavaScript. My problem is that out of 100 requests to api.twitter.com/oauth/request_token, about 30 fail with the usual error message: "Failed to validate oauth signature and token". The other 70% of requests produce a correct response, so I believe that my algorithm for signing is correct. I've read about invalid timestamps in a lot of forums and mailing lists, but that is not the problem; my timestamps are correct. I also checked that the nonces are unique, so that's not the cause either. Any ideas why this is happening?

    Read the article

  • Is it possible to spoof or reuse VIEWSTATE or detect if it is protected from modification?

    - by Peter Jaric
    Question: ASP and ASP.NET web applications use a value called VIEWSTATE in forms. From what I understand, this is used to persist some kind of state on the client between requests to the web server. I have never worked with ASP or ASP.NET and need some help with two questions (and some sub-questions):
        1) Is it possible to programmatically spoof/construct a VIEWSTATE for a form? Clarification: can a program look at a form and from that construct the contents of the base64-encoded VIEWSTATE value?
        1 a) Or can it always just be left out?
        1 b) Can an old VIEWSTATE for a particular form be reused in a later invocation of the same form, or would it just be luck if that worked?
        2) I gather from http://msdn.microsoft.com/en-us/library/ms972976.aspx#viewstate_topic12 that it is possible to turn on security so that the VIEWSTATE becomes secure from spoofing. Is it possible for a program to detect that a VIEWSTATE is safeguarded in such a way?
        2 a) Is there a one-to-one mapping between the occurrence of EVENTVALIDATION values and secure VIEWSTATEs?
    Regarding 1) and 2), if yes, can I have a hint about how I would do that? For 2) I am thinking I could base64-decode the value and search for a string that is always found in unencrypted VIEWSTATEs. "First:"? Something else?
    Background: I have made a small tool for detecting and exploiting so-called CSRF vulnerabilities. I use it to quickly make proofs of concept of such vulnerabilities that I send to the affected site owners. Quite often I encounter these forms with a VIEWSTATE, and for these I don't know whether they are secure or not.
    Edit 1: Clarified question 1 somewhat. Edit 2: Added text in italics.
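
    A rough heuristic sketch for question 2, offered as an assumption rather than a definitive check: a __VIEWSTATE that is not encrypted typically base64-decodes to an ObjectStateFormatter stream that begins with the bytes 0xFF 0x01 and contains readable strings, while an encrypted one decodes to what looks like random bytes. The printable-ratio threshold below is an arbitrary choice, and note that even a readable ViewState is normally protected by a MAC (EnableViewStateMac), so "readable" does not automatically mean "spoofable".

        import base64

        def viewstate_looks_unencrypted(viewstate_b64):
            raw = base64.b64decode(viewstate_b64)
            if len(raw) >= 2 and raw[:2] == b'\xff\x01':
                return True                      # ObjectStateFormatter marker (ASP.NET 2.0+)
            printable = sum(1 for b in raw if 32 <= b < 127)
            return printable / max(len(raw), 1) > 0.6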

    Read the article

  • Video editing language

    - by wvd
    Hi folks, my next project will be all about language tools, parsing and such. For that reason I've decided to write a simple language which can be used for video editing. So instead of those desktop applications (Sony Vegas, Adobe Premiere, ...), it's basically a language where you define the effects and so on, and it will generate a video for you. Since I've got no experience in this kind of business, I need some help. The goal of the project is to create a simple language which is able to do some basic things (such as text fading in, etc.). I am looking for articles/projects/blogs/whatever related to this which could help me write this language. (Note that I don't need articles about language parsing, since I'm pretty familiar with that - just the video editing part.) Thanks, William v. Doorn

    Read the article

  • Can get members, but not count of NSMutableArray

    - by Curyous
    I'm filling an NSMutableArray from a CoreData call. I can get the first object, but when I try to get the count, the app crashes with Program received signal: "EXC_BAD_ACCESS". How can I get the count? Here's the relevant code - I've put a comment on the line where it crashes.
        - (void)viewDidLoad {
            [super viewDidLoad];
            managedObjectContext = [[MySingleton sharedInstance] managedObjectContext];
            if (managedObjectContext != nil) {
                charactersRequest = [[NSFetchRequest alloc] init];
                charactersEntity = [NSEntityDescription entityForName:@"Character" inManagedObjectContext:managedObjectContext];
                [charactersEntity retain];
                [charactersRequest setEntity:charactersEntity];
                [charactersRequest retain];
                NSError *error;
                characters = [[managedObjectContext executeFetchRequest:charactersRequest error:&error] mutableCopy];
                if (characters == nil) {
                    NSLog(@"Did not get results for characters: %@", error.localizedDescription);
                } else {
                    [characters retain];
                    NSLog(@"Found some character(s).");
                    Character* character = (Character *)[characters objectAtIndex:0];
                    NSLog(@"Name of first one: %@", character.name);
                    NSLog(@"Found %@ character(s).", characters.count); // crashes on this line with - Program received signal: "EXC_BAD_ACCESS"
                }
            }
        }
    And the previous declarations from the header file:
        @interface CrowdViewController : UITableViewController {
            NSManagedObjectContext *managedObjectContext;
            NSFetchRequest *charactersRequest;
            NSEntityDescription *charactersEntity;
            NSMutableArray *characters;
        }
    I'm a bit perplexed and would really appreciate finding out what is going on.
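
    One thing worth checking, grounded in the marked line rather than in anything outside the excerpt: characters.count is an NSUInteger, not an object, so formatting it with %@ makes NSLog treat an integer as an object pointer, and dereferencing that is a classic EXC_BAD_ACCESS. An integer format specifier avoids the crash:

        NSLog(@"Found %lu character(s).", (unsigned long)[characters count]);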

    Read the article

  • png image blurry when loaded onto texture

    - by Chris
    I have created a PNG image in Photoshop with transparency, which I have loaded into an OpenGL program. I have bound it to a texture, but in the program the picture looks blurry and I'm not sure why.
    Loading code:
        // Texture loading object
        nv::Image title;
        // Return true on success
        if (title.loadImageFromFile("test.png"))
        {
            glGenTextures(1, &titleTex);
            glBindTexture(GL_TEXTURE_2D, titleTex);
            glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
            glTexImage2D(GL_TEXTURE_2D, 0, title.getInternalFormat(), title.getWidth(), title.getHeight(), 0, title.getFormat(), title.getType(), title.getLevel(0));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
        }
        else
            MessageBox(NULL, "Failed to load texture", "End of the world", MB_OK | MB_ICONINFORMATION);
    Display code:
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, titleTex);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glTranslatef(-800, 0, 0.0);
        glColor3f(1, 1, 1);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
            glTexCoord2f(0.0, 1.0); glVertex2f(0, 600);
            glTexCoord2f(1.0, 1.0); glVertex2f(1600, 600);
            glTexCoord2f(1.0, 0.0); glVertex2f(1600, 0);
        glEnd();
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
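
    Two things worth trying, as assumptions rather than a confirmed diagnosis: the quad is drawn at 1600x600, so unless the PNG has those exact dimensions the texture is being resampled, and GL_LINEAR magnification (plus the mipmapped minification) will smear the pixels. Drawing the quad at the image's native size avoids the resampling entirely; otherwise, nearest-neighbour filtering keeps the pixels crisp at the cost of a blockier look:

        glBindTexture(GL_TEXTURE_2D, titleTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   /* takes the mipmap chain out of the equation */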

    Read the article

  • Usage of Visual Leak Detector

    - by Yan Cheng CHEOK
    I found a very interesting memory leak detector for Visual C++: http://www.codeproject.com/KB/applications/visualleakdetector.aspx
    I tried it out, but I cannot make it work: it does not detect code that leaks memory. I am using MS Visual Studio 2008. Is there any step I have missed?
        #include "stdafx.h"
        #include "vld.h"
        #include <iostream>

        void fun() {
            new int[1000];
        }

        int _tmain(int argc, _TCHAR* argv[]) {
            fun();
            std::cout << "lead?" << std::endl;
            getchar();
            return 0;
        }
    The output when I run in debug mode is:
        ...
        'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll', Symbols loaded.
        'Test.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll', Symbols loaded (source information stripped).
        'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.DebugCRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_f863c71f\msvcp90d.dll', Symbols loaded.
        'Test.exe': Loaded 'C:\Program Files\Visual Leak Detector\bin\dbghelp.dll', Symbols loaded (source information stripped).
        Visual Leak Detector Version 1.9d installed.
        No memory leaks detected.
        Visual Leak Detector is now exiting.
        The program '[5468] Test.exe: Native' has exited with code 0 (0x0).

    Read the article

  • EXC_BAD_ACCESS error in my iPhone application

    - by Jaimin
    When I print the dictionary I get this error. Here my dictTf is an NSMutableDictionary. When I am on the home page I select a few fields and click Find, so a new view comes up with the result. Now I go back and click Find again without changing anything, and it comes up properly. At this point, when I go back, it shows the following for the dictionary and the EXC_BAD_ACCESS error comes up:
        Printing description of dictTf:
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        (gdb)

    Read the article

  • C++ Initialize array in constructor EXC_BAD_ACCESS

    - by user890395
    I'm creating a simple constructor and initializing an array:
        // Constructor
        Cinema::Cinema(){
            // Initialize reservations
            for(int i = 0; i < 18; i++){
                for(int j = 0; j < 12; j++){
                    setReservation(i, j, 0);
                }
            }
            // Set default name
            setMovieName("N/A");
            // Set default price
            setPrice(8);
        }
    The setReservation function:
        void Cinema::setReservation(int row, int column, int reservation){
            this->reservations[row][column] = reservation;
        }
    The setMovieName function:
        void Cinema::setMovieName(std::string movieName){
            this->movieName = movieName;
        }
    For some odd reason, when I run the program, the setMovieName function gives the following error: "Program Received Signal: EXC_BAD_ACCESS". If I take out the for-loop that initializes the array of reservations, the problem goes away and the movie name is set without any problems. Any idea what I'm doing wrong? This is the Cinema.h file:
        #ifndef Cinema_h
        #define Cinema_h

        class Cinema{
        private:
            int reservations[17][11];
            std::string movieName;
            float price;

        public:
            // Constructor
            Cinema();

            // getters/setters
            int getReservation(int row, int column);
            int getNumReservations();
            std::string getMovieName();
            float getPrice();
            void setReservation(int row, int column, int reservation);
            void setMovieName(std::string movieName);
            void setPrice(float price);
        };

        #endif
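
    The mismatch is visible in the snippets themselves rather than being an outside guess: reservations is declared int[17][11], so the valid indices are 0..16 and 0..10, yet the constructor loops write rows 0..17 and columns 0..11. Those out-of-bounds writes trample whatever sits next to the array in the object (such as movieName), which is consistent with setMovieName crashing only when the loop runs. Making the array as large as the loops expect (or shrinking the loop bounds, whichever matches the intended seating plan) removes the overrun:

        class Cinema{
        private:
            int reservations[18][12];   // room for rows 0..17 and columns 0..11
            std::string movieName;
            float price;
            // ... rest of the class unchanged
        };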

    Read the article

  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:
        std::string getMessageName(void) const {
            return std::string(typeid(*this).name());
        }
    And yes, before anyone points it out, I realise that the output of typeinfo::name() is implementation-specific. According to MSDN, "The type_info::name member function returns a const char* to a null-terminated string representing the human-readable name of the type. The memory pointed to is cached and should never be directly deallocated." However, when I exit my program in the debugger, any "new" use of typeinfo::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed. While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how typeinfo may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on typeinfo::name, but I'd like to know anyway if there's something I've missed.

    Read the article

  • User input in Perl with IO::Socket

    - by David
    I am trying to make a Perl program which allows a user to input the host and the port number of a foreign host to connect to using IO::Socket. It lets me run the program and input a host and a port, but it never connects and says "Could not connect to [host] at c:\users\USER\Documents\code\perl\sql.pl line 18, line 2." What am I doing wrong with the code shown below? And also, how can I have input validation on my host, which can be either a host name or an IP address? Thanks a bunch! Code below:
        use IO::Socket

        print "Host to connect to: ";
        chomp ($host = <STDIN>);
        print "Port to connect with: ";
        chomp ($port = <STDIN>);

        while(($port > 65535) || ($port <= 0)){
            print "Port to connect with [Port > 0 < 65535] : ";
            chomp ($port = <STDIN>);
        }

        print "\nConnecting to host $host on port $port\n";

        $socket = new IO::Socket::INET (
            LocalHost => '$host',
            LocalPort => '$port',
            Proto     => 'tcp',
            Listen    => 5,
            Reuse     => 1
        );

        die "Could not connect to $host";
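
    A sketch of an outbound (client) connection, which appears to be the intent here. Two observations grounded in the snippet itself: LocalHost/LocalPort/Listen build a listening server rather than connecting out to a remote machine (that is what PeerAddr/PeerPort do), and '$host' in single quotes is the literal string $host, not the variable's value. The sketch below also dies with $! so the OS reports why a connection failed:

        use strict;
        use warnings;
        use IO::Socket::INET;

        print "Host to connect to: ";
        chomp(my $host = <STDIN>);
        print "Port to connect with: ";
        chomp(my $port = <STDIN>);

        my $socket = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => $port,
            Proto    => 'tcp',
        ) or die "Could not connect to $host:$port: $!";

        print "Connected to $host on port $port\n";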

    Read the article
