Search Results

Search found 3511 results on 141 pages for 'const correctness'.

  • How do I pass a callback function to sqlite3_exec on iOS 5.1?

    - by John Doh
    I am new to Xcode/iOS/Objective-C and SQLite alike. I am trying to teach myself the basics, and I would like to use the SQLite wrapper sqlite3_exec for a select query. For some reason I can't find a simple example anywhere of someone doing this. Basically, the method takes a callback function as its third parameter:

        int sqlite3_exec(
            sqlite3*,                                  /* An open database */
            const char *sql,                           /* SQL to be evaluated */
            int (*callback)(void*,int,char**,char**),  /* Callback function */
            void *,                                    /* 1st argument to callback */
            char **errmsg                              /* Error msg written here */
        );

    That's fine; I'm no stranger to callbacks. However, I just can't seem to get the syntax down right. I took over one of the view controllers in my iPad (iOS 5.1) Xcode (4.3) project and made the changes shown below:

        #import "SecondViewController.h"
        #import "sqlite3.h"
        #import "AppState.h"

        @interface SecondViewController ()
        @end

        @implementation SecondViewController

        - (int)myCallback:(void *)a_parm argc:(int)argc argv:(char **)argv column:(char **)column
        {
            return 0;
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.

            // Grab questionnaire names.
            char *sql = "select * from QST2Main order by [Name]";
            char *err = nil;
            sqlite3 *db = [[AppState sharedManager] getgCn];
            sqlite3_exec(db, sql, myCallback, nil, &err);
        }

    Essentially, I want to run a query when this view first loads, to store some data for later use. But Xcode doesn't like the myCallback usage at the bottom there; it says: use of undeclared identifier 'myCallback'. That method is declared in the header file, and I've even tried making it static. Nothing makes the error go away. I know I must be doing something fundamentally wrong here, but for the life of me I can't figure out what, and I can't even find other code samples in this area that could help me figure out what I'm missing. Many thanks!
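
    Since sqlite3_exec is a C API, the callback has to be a plain C function rather than an Objective-C method. A minimal sketch of that shape (names illustrative, not the poster's actual schema):

        #include <sqlite3.h>
        #include <stdio.h>

        /* A free function matching the exact signature sqlite3_exec expects. */
        static int myCallback(void *context, int argc, char **argv, char **columnNames)
        {
            for (int i = 0; i < argc; i++)
                printf("%s = %s\n", columnNames[i], argv[i] ? argv[i] : "NULL");
            return 0; /* returning non-zero makes sqlite3_exec stop with SQLITE_ABORT */
        }

        /* Usage: sqlite3_exec(db, sql, myCallback, NULL, &err); */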

  • Compiler optimization causing performance to slow down

    - by aJ
    I have one strange problem. I have the following piece of code:

        template<class index, class policy>
        inline int CBase<index,policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem

            for (int y = 0; y < height; y++)
            {
                // Pointer initializations
                // Multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for (int x = 0; x < width; x++)
                {
                    // Multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if (someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop gets executed nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change the method suddenly takes 500 ms for the same number of executions, five times slower. The behavior is reproducible on different machines with different processor types (Core2, dual-core processors). I am using the VC6 compiler with optimization level O2; the other compiler options used are: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed /O2; after that the function behaved normally again, taking 100 ms like the old code. Could anyone throw some light on this strange behavior? Why should a compiler optimization slow down performance when I remove an unused variable? Note: the assembly code (before and after the change) looked the same.

  • Use of for_each on map elements

    - by Antonio
    I have a map where I'd like to call a member function on every mapped object. I know how to do this on any sequence, but is it possible to do it on an associative container? The closest answer I could find was this: Boost.Bind to access std::map elements in std::for_each. But I cannot use Boost in my project, so is there an STL alternative to boost::bind that I'm missing? If it is not possible, I thought of creating a temporary sequence of pointers to the mapped objects and then calling for_each on it, something like this:

        class MyClass
        {
        public:
            void Method() const;
        };

        std::map<int, MyClass> Map;
        //...
        std::vector<MyClass*> Vector;
        std::transform(Map.begin(), Map.end(), std::back_inserter(Vector),
                       std::mem_fun_ref(&std::map<int, MyClass>::value_type::second));
        std::for_each(Vector.begin(), Vector.end(), std::mem_fun(&MyClass::Method));

    It looks too obfuscated and I don't really like it. Any suggestions?
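
    One binder-free alternative that needs nothing beyond the standard library (a pre-C++11 sketch): give std::for_each a plain free function whose parameter is the map's value_type:

        #include <algorithm>
        #include <map>

        class MyClass {
        public:
            void Method() const {}
        };

        // A free function taking the map's value_type; for_each can call it directly.
        void CallMethod(const std::pair<const int, MyClass>& p)
        {
            p.second.Method();
        }

        int main()
        {
            std::map<int, MyClass> m;
            std::for_each(m.begin(), m.end(), CallMethod);
        }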

  • Encode and Decode using UTF-8 on iPhone

    - by Ekra
    Hi friends, I want an example where I can encode and then decode the same string using UTF-8. "Encode and then decode" means I want to implement the method in two places, where one side can encode the string and the other is able to decode it. I have looked at the API but didn't get much success:

        stringWithCString:encoding:
        stringWithUTF8String:
        stringWithCString:(const char *)cString encoding:(NSStringEncoding)enc;

    =================EDITED=================
    I have the string "øæ-test-2.txt". When I encode it:

        char *s = "øæ-test-2.txt";
        NSString *enc = [NSString stringWithCString:s encoding:NSASCIIStringEncoding];

    I am getting "Ã¸Ã¦-test-2.txt" as output. Now I want to get the original string back, i.e. "øæ-test-2.txt".
    +++++++++EDITED+++++++++++++++++++
    I am getting "Ã¸Ã¦-test-2.txt" from the server and I need "øæ-test-2.txt" by decoding it. I am able to get the right output from the link below: http://www.cafewebmaster.com/online_tools/utf_decode Please try the link and you will understand my concern. I need the solution on an urgent basis. It would be highly appreciated if anyone could give some hint or a tutorial in the right direction. Regards

  • valgrind complains doing a very simple strtok in c

    - by monkeyking
    Hi, I'm trying to tokenize a string by loading an entire file into a char[] using fread. For some strange reason it does not always work, and valgrind complains in this very small sample program. Given an input file test.txt:

        first second

    And the following program:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <sys/stat.h>

        // returns the filesize in bytes
        size_t fsize(const char* fname){
            struct stat st;
            stat(fname,&st);
            return st.st_size;
        }

        int main(int argc, char *argv[]){
            FILE *fp = NULL;
            if(NULL==(fp=fopen(argv[1],"r"))){
                fprintf(stderr,"\t-> Error reading file:%s\n",argv[1]);
                return 0;
            }
            char buffer[fsize(argv[1])];
            fread(buffer,sizeof(char),fsize(argv[1]),fp);
            char *str = strtok(buffer," \t\n");
            while(NULL!=str){
                fprintf(stderr,"token is:%s with strlen:%lu\n",str,strlen(str));
                str = strtok(NULL," \t\n");
            }
            return 0;
        }

    compiling like

        gcc test.c -std=c99 -ggdb

    running like

        ./a.out test.txt

    thanks
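
    For what it's worth, the usual culprit in this pattern is that fread does not NUL-terminate the buffer, so strtok reads past the end of the allocation. A sketch of the fix, sized one byte larger and terminated explicitly (rewritten with std::vector so it is also valid C++):

        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main(int argc, char *argv[])
        {
            if (argc < 2) return 1;
            std::FILE *fp = std::fopen(argv[1], "r");
            if (!fp) return 1;

            std::fseek(fp, 0, SEEK_END);
            long size = std::ftell(fp);
            std::rewind(fp);

            std::vector<char> buffer(size + 1);  // one extra byte...
            std::fread(&buffer[0], 1, size, fp);
            buffer[size] = '\0';                 // ...so strtok sees a proper C string
            std::fclose(fp);

            for (char *str = std::strtok(&buffer[0], " \t\n"); str != NULL;
                 str = std::strtok(NULL, " \t\n"))
                std::fprintf(stderr, "token is:%s with strlen:%lu\n",
                             str, (unsigned long)std::strlen(str));
            return 0;
        }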

  • Why is `A & a = a` valid?

    - by psaghelyi
        #include <iostream>
        #include <assert.h>
        using namespace std;

        struct Base
        {
            Base() : m_member1(1) {}
            Base(const Base & other)
            {
                assert(this != &other);  // this should trigger
                m_member1 = other.m_member1;
            }
            int m_member1;
        };

        struct Derived
        {
            Derived(Base & base) : m_base(m_base) {}  // m_base(base)
            Base & m_base;
        };

        int main()
        {
            Base base;
            Derived derived(base);
            cout << derived.m_base.m_member1 << endl;  // crashes here
        }

    The above example is a synthesized version of a mistyped constructor. I used a reference for the class member Derived::m_base because I wanted to make sure that the member would be initialized when the constructor was called. One problem is that neither GCC nor MSVC gives me a warning on m_base(m_base). But the more serious issue for me is that the assert finds everything fine and the application crashes later (sometimes far away from the mistake). Question: is there any way to catch such mistakes?

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out due to the following problem: I am following the example given in the Boost.Interprocess documentation to instantiate, in shared memory, a fixed-size ring-buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate(); // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0 ; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate(); // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node at address m_start_ptr + n*sizeof(BufferNode) in my Read() method, will it work? If not, what's a better way to keep the nodes: creating a linked list? My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment.
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size.
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator.
            const ShmemAllocator alloc_inst (segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst.
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates on the last statement. What am I doing wrong?

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple telnet/server daemon which has to run a program on each new socket connection. That part works fine. But I have to associate my new process with a pty, because the process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the new socket file descriptor for the new incoming connection):

        int masterfd, pid;
        const char *prgName = "...";
        char *arguments[10] = ...;

        if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0)
            perror("FORK");
        else if (pid)
            return pid;
        else
        {
            close(STDOUT_FILENO);
            dup2(socketfd, STDOUT_FILENO);
            close(STDIN_FILENO);
            dup2(socketfd, STDIN_FILENO);
            close(STDERR_FILENO);
            dup2(socketfd, STDERR_FILENO);

            if (execvp(prgName, arguments) < 0)
            {
                perror("execvp");
                exit(2);
            }
        }

    With that code, the stdin/stdout/stderr file descriptors of my prgName are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of the process don't work. A test with a connection via ssh/sshd to the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of the prgName process are associated with a pty, and so its terminal capabilities work fine. What am I doing wrong? How do I associate my socketfd with the pty created by forkpty? Thanks, Alex
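
    One common shape for the fix, sketched under the assumption that the parent sticks around for the lifetime of the connection: leave the child's stdin/stdout/stderr on the pty (forkpty already arranged that, so drop the dup2 calls) and have the parent pump bytes between socketfd and masterfd:

        #include <sys/select.h>
        #include <unistd.h>

        /* Parent side: copy bytes in both directions until either end closes. */
        static void relay(int socketfd, int masterfd)
        {
            char buf[4096];
            for (;;) {
                fd_set fds;
                FD_ZERO(&fds);
                FD_SET(socketfd, &fds);
                FD_SET(masterfd, &fds);
                int maxfd = (socketfd > masterfd ? socketfd : masterfd) + 1;
                if (select(maxfd, &fds, NULL, NULL, NULL) <= 0)
                    break;
                if (FD_ISSET(socketfd, &fds)) {
                    ssize_t n = read(socketfd, buf, sizeof buf);
                    if (n <= 0 || write(masterfd, buf, n) < 0) break;
                }
                if (FD_ISSET(masterfd, &fds)) {
                    ssize_t n = read(masterfd, buf, sizeof buf);
                    if (n <= 0 || write(socketfd, buf, n) < 0) break;
                }
            }
        }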

  • ObjC: Alloc instance of Class and performing selectors on it leads to __CFRequireConcreteImplementation

    - by Arakyd
    Hi, I'm new to Objective-C and I'd like to abstract my database access using a model class like this:

        @interface LectureModel : NSMutableDictionary
        {
        }
        -(NSString*)title;
        -(NSDate*)begin;
        ...
        @end

    I use the dictionary method setValue:forKey: to store attributes and return them in the getters. Now I want to read these models from a sqlite database by using the Class dynamically:

        + (NSArray*)findBySQL:(NSString*)sql intoModelClass:(Class)modelClass
        {
            NSMutableArray* models = [[[NSMutableArray alloc] init] autorelease];
            sqlite3* db = sqlite3_open(...);
            sqlite3_stmt* result = NULL;
            sqlite3_prepare_v2(db, [sql UTF8String], -1, &result, NULL);
            while(sqlite3_step(result) == SQLITE_ROW)
            {
                id modelInstance = [[modelClass alloc] init];
                for (int i = 0; i < sqlite3_column_count(result); ++i)
                {
                    NSString* key = [NSString stringWithUTF8String:sqlite3_column_name(result, i)];
                    NSString* value = [NSString stringWithUTF8String:(const char*)sqlite3_column_text(result, i)];
                    if([modelInstance respondsToSelector:@selector(setValue:forKey:)])
                        [modelInstance setValue:value forKey:key];
                }
                [models addObject:modelInstance];
            }
            sqlite3_finalize(result);
            sqlite3_close(db);
            return models;
        }

    The funny thing is, respondsToSelector: works, but if I try (in the debugger) to step over [modelInstance setValue:value forKey:key], it throws an exception, and the stack trace looks like:

        #0  0x302ac924 in ___TERMINATING_DUE_TO_UNCAUGHT_EXCEPTION___
        #1  0x991d9509 in objc_exception_throw
        #2  0x302d6e4d in __CFRequireConcreteImplementation
        #3  0x00024d92 in +[DBManager findBySQL:intoModelClass:] at DBManager.m:114
        #4  0x0001ea86 in -[FavoritesViewController initializeTableData:] at FavoritesViewController.m:423
        #5  0x0001ee41 in -[FavoritesViewController initializeTableData] at FavoritesViewController.m:540
        #6  0x305359da in __NSFireDelayedPerform
        #7  0x302454a0 in CFRunLoopRunSpecific
        #8  0x30244628 in CFRunLoopRunInMode
        #9  0x32044c31 in GSEventRunModal
        #10 0x32044cf6 in GSEventRun
        #11 0x309021ee in UIApplicationMain
        #12 0x00002988 in main at main.m:14

    So, what's wrong with this? Presumably I'm doing something really stupid and just don't see it... Many thanks in advance for your answers, Arakyd

  • Possible mem leak?

    - by LCD Fire
    I'm new to the concept, so don't be hard on me. Why doesn't this code produce a destructor call for both objects? The names of the classes are self-explanatory: SString prints a message in ~SString(). It only prints one destructor message.

        int main(int argc, TCHAR* argv[])
        {
            smart_ptr<SString> smt(new SString("not lost"));
            new smart_ptr<SString>(new SString("but lost"));
            return 0;
        }

    Is this a memory leak? The implementation for smart_ptr is from here.

    edited:

        // copy ctor
        smart_ptr(const smart_ptr<T>& ptrCopy)
        {
            m_AutoPtr = new T(ptrCopy.get());
        }

        // overloading = operator
        smart_ptr<T>& operator=(smart_ptr<T>& ptrCopy)
        {
            if(m_AutoPtr)
                delete m_AutoPtr;
            m_AutoPtr = new T(*ptrCopy.get());
            return *this;
        }
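
    Yes, this is a leak. A self-contained sketch of the mechanism, with a stand-in for the linked smart_ptr: the second wrapper is itself allocated with new and never deleted, so its destructor, and therefore the owned object's destructor, never runs:

        #include <iostream>

        struct Tracer {
            const char *name;
            Tracer(const char *n) : name(n) {}
            ~Tracer() { std::cout << "~Tracer(" << name << ")" << std::endl; }
        };

        // Minimal stand-in for the question's smart_ptr.
        template <typename T>
        struct owner {
            T *p;
            owner(T *q) : p(q) {}
            ~owner() { delete p; }
        };

        int main()
        {
            owner<Tracer> a(new Tracer("not lost"));    // destroyed at end of scope
            new owner<Tracer>(new Tracer("but lost"));  // the owner itself is never
                                                        // deleted, so neither is its Tracer
            return 0;                                   // prints only ~Tracer(not lost)
        }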

  • Threadsafe binding with DispatcherObject.CheckAccess()

    - by maffe
    Hi, according to this, I can achieve thread safety with a large overhead. I wrote the following class and use it. It works fine.

        public abstract class BindingBase : DispatcherObject, INotifyPropertyChanged, INotifyPropertyChanging
        {
            private string _displayName;
            private const string NameDisplayName = "DisplayName";

            /// <summary>
            /// The display name for the gui element which bound this instance. It can be used for localization.
            /// </summary>
            public string DisplayName
            {
                get { return _displayName; }
                set
                {
                    NotifyPropertyChanging(NameDisplayName);
                    _displayName = value;
                    NotifyPropertyChanged(NameDisplayName);
                }
            }

            protected BindingBase() {}

            protected BindingBase(string displayName)
            {
                DisplayName = displayName;
            }

            public event PropertyChangedEventHandler PropertyChanged;
            public event PropertyChangingEventHandler PropertyChanging;

            protected void NotifyPropertyChanged(string name)
            {
                if (PropertyChanged == null)
                    return;
                if (CheckAccess())
                    PropertyChanged.Invoke(this, new PropertyChangedEventArgs(name));
                else
                    Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanged(name)));
            }

            protected void NotifyPropertyChanging(string name)
            {
                if (PropertyChanging == null)
                    return;
                if (CheckAccess())
                    PropertyChanging.Invoke(this, new PropertyChangingEventArgs(name));
                else
                    Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanging(name)));
            }
        }

    So is there a reason why I've never seen something like this? Are there any issues I should be aware of? Regards

  • How to create static method that evaluates local static variable once?

    - by Viet
    I have a class with a static method which has a local static variable. I want that variable to be computed/evaluated once, the first time I call the function, and not evaluated again on any subsequent invocation. How do I do that? Here's my class:

        template< typename T1 = int, unsigned N1 = 1,   typename T2 = int, unsigned N2 = 0,
                  typename T3 = int, unsigned N3 = 0,   typename T4 = int, unsigned N4 = 0,
                  typename T5 = int, unsigned N5 = 0,   typename T6 = int, unsigned N6 = 0,
                  typename T7 = int, unsigned N7 = 0,   typename T8 = int, unsigned N8 = 0,
                  typename T9 = int, unsigned N9 = 0,   typename T10 = int, unsigned N10 = 0,
                  typename T11 = int, unsigned N11 = 0, typename T12 = int, unsigned N12 = 0,
                  typename T13 = int, unsigned N13 = 0, typename T14 = int, unsigned N14 = 0,
                  typename T15 = int, unsigned N15 = 0, typename T16 = int, unsigned N16 = 0>
        struct GroupAlloc
        {
            static const uint32_t sizeClass;

            static uint32_t getSize()
            {
                static uint32_t totalSize = 0;

                totalSize += sizeof(T1)*N1;
                totalSize += sizeof(T2)*N2;
                totalSize += sizeof(T3)*N3;
                totalSize += sizeof(T4)*N4;
                totalSize += sizeof(T5)*N5;
                totalSize += sizeof(T6)*N6;
                totalSize += sizeof(T7)*N7;
                totalSize += sizeof(T8)*N8;
                totalSize += sizeof(T9)*N9;
                totalSize += sizeof(T10)*N10;
                totalSize += sizeof(T11)*N11;
                totalSize += sizeof(T12)*N12;
                totalSize += sizeof(T13)*N13;
                totalSize += sizeof(T14)*N14;
                totalSize += sizeof(T15)*N15;
                totalSize += sizeof(T16)*N16;

                totalSize = 8*((totalSize + 7)/8);
                return totalSize;
            }
        };
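
    A sketch of the usual idiom: move the computation into a helper and use it as the initializer of the local static, since a local static's initializer runs only the first time control passes through the declaration (two summands here stand in for the sixteen sizeof terms):

        #include <stdint.h>

        static uint32_t computeSize()
        {
            uint32_t totalSize = 0;
            totalSize += sizeof(int) * 1;  // stands in for sizeof(T1)*N1 ...
            totalSize += sizeof(int) * 0;  // ... through sizeof(T16)*N16
            return 8 * ((totalSize + 7) / 8);
        }

        static uint32_t getSize()
        {
            static const uint32_t totalSize = computeSize();  // evaluated exactly once
            return totalSize;
        }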

  • Generic allocator class without variadic templates?

    - by rainer
    I am trying to write a generic allocator class that does not really release an object's memory when it is free()'d but holds it in a queue and returns a previously allocated object if a new one is requested. Now, what I can't wrap my head around is how to pass arguments to the object's constructor when using my allocator (at least without resorting to variadic templates, that is). The alloc() function I came up with looks like this:

        template <typename... Args>
        inline T *alloc(const Args&... args)
        {
            T *p;
            if (_free.empty()) {
                p = new T(args...);
            }
            else {
                p = _free.front();
                _free.pop();
                // to call the ctor of T, we need to first call its dtor
                p->~T();
                p = new( p ) T(args...);
            }
            return p;
        }

    Still, I need the code to be compatible with today's C++ (and older versions of GCC that do not support variadic templates). Is there any other way to go about passing an arbitrary number of arguments to the object's constructor?
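
    The usual C++03 answer is a small family of alloc() overloads, one per arity, the way Boost emulates variadics internally. A sketch, assuming the _free queue and the enclosing class template over T from the question:

        #include <new>  // placement new

        // Zero-argument form.
        T *alloc()
        {
            if (_free.empty()) return new T();
            T *p = _free.front(); _free.pop();
            p->~T();             // run the recycled object's dtor...
            return new (p) T();  // ...then construct in place
        }

        // One-argument form; repeat the pattern for 2, 3, ... arguments as needed.
        template <typename A1>
        T *alloc(const A1& a1)
        {
            if (_free.empty()) return new T(a1);
            T *p = _free.front(); _free.pop();
            p->~T();
            return new (p) T(a1);
        }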

  • Preparing a MySQL INSERT/UPDATE statement with DEFAULT values

    - by Raveren
    Quoting the MySQL INSERT manual (the same goes for UPDATE):

        Use the keyword DEFAULT to set a column explicitly to its default value. This makes it easier to write INSERT statements that assign values to all but a few columns, because it enables you to avoid writing an incomplete VALUES list that does not include a value for each column in the table. Otherwise, you would have to write out the list of column names corresponding to each value in the VALUES list.

    So, in short, if I write

        INSERT INTO table1 (column1,column2) values ('value1',DEFAULT);

    a new row with column2 set to its default value, whatever it may be, is inserted. However, if I prepare and execute a statement in PHP:

        $statement = $pdoObject->prepare("INSERT INTO table1 (column1,column2) values (?,?)");
        $statement->execute(array('value1','DEFAULT'));

    the new row will contain 'DEFAULT' as its text value, if the column is able to store text values. Now, I have written an abstraction layer over PDO (I needed it), and to get around this issue I am considering introducing a

        const DEFAULT_VALUE = "randomstring";

    so I could execute statements like this:

        $statement->execute(array('value1', mysql::DEFAULT_VALUE));

    Then, in the method that does the binding, I'd go through the values to be bound and, if some are equal to self::DEFAULT_VALUE, act accordingly. I'm pretty sure there's a better way to do this. Has someone else encountered similar situations?

  • Help me create a Firefox extension (Javascript XPCOM Component)

    - by Johnny Grass
    I've been looking at different tutorials, and I know I'm close, but I'm getting lost in implementation details because some of them are a little dated and a few things have changed since Firefox 3. I have already written the JavaScript for the Firefox extension; now I need to make it into an XPCOM component. This is the functionality that I need: my JavaScript file is simple, with two functions, startServer() and stopServer(). I need to run startServer() when the browser starts and stopServer() when Firefox quits. Edit: I've updated my code with a working solution (thanks to Neil). The following is in MyExtension/components/myextension.js:

        Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

        const CI = Components.interfaces, CC = Components.classes, CR = Components.results;

        // class declaration
        function MyExtension() {}

        MyExtension.prototype = {
            classDescription: "My Firefox Extension",
            classID:          Components.ID("{xxxx-xxxx-xxx-xxxxx}"),
            contractID:       "@example.com/MyExtension;1",
            QueryInterface: XPCOMUtils.generateQI([CI.nsIObserver]),

            // add to category manager
            _xpcom_categories: [{ category: "profile-after-change" }],

            // start socket server
            startServer: function () { /* socket initialization code */ },

            // stop socket server
            stopServer: function () { /* stop server */ },

            observe: function(aSubject, aTopic, aData) {
                var obs = CC["@mozilla.org/observer-service;1"].getService(CI.nsIObserverService);
                switch (aTopic) {
                    case "quit-application":
                        this.stopServer();
                        obs.removeObserver(this, "quit-application");
                        break;
                    case "profile-after-change":
                        this.startServer();
                        obs.addObserver(this, "quit-application", false);
                        break;
                    default:
                        throw Components.Exception("Unknown topic: " + aTopic);
                }
            }
        };

        var components = [MyExtension];

        function NSGetModule(compMgr, fileSpec) {
            return XPCOMUtils.generateModule(components);
        }

  • Facade Design Patterns and Subclassing

    - by Code Sherpa
    Hi. I am using a facade design pattern for a C# program. The program basically looks like this:

        public class Api
        {
            #region Constants

            private const int version = 1;

            #endregion

            #region Private Data

            private XProfile _profile;
            private XMembership _membership;
            private XRoles _role;

            #endregion Private Data

            public Api()
            {
                _membership = new XMembership();
                _profile = new XProfile();
                _role = new XRoles();
            }

            public int GetUserId(string name)
            {
                return _membership.GetIdByName(name);
            }
        }

    Now, I would like to subclass my methods into three categories: Role, Profile, and Member. This will be easier on the developer's eye, because both Profile and Membership expose a lot of methods that look similar (and a few by Role). For example, getting a user's ID would look like:

        int _id = Namespace.Api.Member.GetUserId("Henry222");

    Can somebody "illustrate" how subclassing should work in this case to achieve the effect I am looking for? Thanks in advance.

  • Qt: I've inherited from QTreeView. I've inherited from QStandardItem. How do I style the items?

    - by San Jacinto
    My Google skills must be failing me today. I've inherited from QTreeView to create a tree view that stores a QStandardItemModel instead of a plain QAbstractItemModel. I have also inherited from QStandardItem to create a class that stores my data in an item, as is necessary. I've successfully inserted my derived QStandardItem into my derived QTreeView's QStandardItemModel. Now the trouble is, I can't figure out how to style it. I know that QTreeView has a setStyleSheet(QString) member, but I can't seem to get it working. It may be as simple as my not styling the correct attribute. Any pointers would be appreciated. Thanks. For clarity, here are my class definitions:

        class SurveyTreeItem : public QStandardItem
        {
        public:
            SurveyTreeItem();
            SurveyTreeItem( const QString & text );
            ~SurveyTreeItem();
        };

        class StandardItemModelTreeView : public QTreeView
        {
        public:
            StandardItemModelTreeView(QWidget* parent = 0);
            ~StandardItemModelTreeView();
            QStandardItemModel* getStandardItemModel();
        };

    I've tried the following stylesheets:

        StandardTreeView::Item { font: 87 12pt 'Arial Black'; }
        StandardTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::Item { font: 87 12pt 'Arial Black'; }
        QTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }
        StandardTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }
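
    For reference, a sketch of two routes Qt does document, since C++ class names like SurveyTreeItem are never valid stylesheet selectors: set font data on the item itself, or target the ::item sub-control (lower case; how much of the font shorthand a given Qt version honors there varies, so treat that part as an assumption to verify):

        #include <QTreeView>
        #include <QStandardItem>
        #include <QFont>

        void styleExample(QTreeView *view, QStandardItem *item)
        {
            // 1) Per-item: the view renders the item's font/brush roles automatically.
            QFont f("Arial Black", 12);
            f.setWeight(QFont::Black);
            item->setFont(f);

            // 2) Per-view: style every item through the ::item sub-control.
            view->setStyleSheet("QTreeView::item { font: 87 12pt 'Arial Black'; }");
        }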

  • Empty data problem - data layer or DAL?

    - by luckyluke
    I am designing a new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax-whatever data) - dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:

    - create an empty value in each of the dictionaries with a special keyID, i.e. -1
    - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
    - let the DAL know that -1 is special (static const, whatever)
    - don't bother checking for nullness of dictionary entries in the code, because THEY will always have a value

    But maybe I should be thinking:

    - import the data AS IS
    - let the DAL do the thinking, using the empty-record pattern
    - still don't care in the code, because the business layer will get what it needs from the DAL

    I think this is more of an approach question, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty-record problem; I do use the emptyCustomer thing all the time, and other defaults too.

  • Vector of pointers to base class, odd behaviour calling virtual functions

    - by Ink-Jet
    I have the following code:

        #include <iostream>
        #include <vector>

        class Entity
        {
        public:
            virtual void func() = 0;
        };

        class Monster : public Entity
        {
        public:
            void func();
        };

        void Monster::func()
        {
            std::cout << "I AM A MONSTER" << std::endl;
        }

        class Buddha : public Entity
        {
        public:
            void func();
        };

        void Buddha::func()
        {
            std::cout << "OHMM" << std::endl;
        }

        int main()
        {
            const int num = 5; // How many of each to make
            std::vector<Entity*> t;

            for(int i = 0; i < num; i++)
            {
                Monster m;
                Entity * e;
                e = &m;
                t.push_back(e);
            }

            for(int i = 0; i < num; i++)
            {
                Buddha b;
                Entity * e;
                e = &b;
                t.push_back(e);
            }

            for(int i = 0; i < t.size(); i++)
            {
                t[i]->func();
            }

            return 0;
        }

    However, when I run it, instead of each class printing out its own message, they all print the "Buddha" message. I want each object to print its own message: Monsters print the monster message, Buddhas print the Buddha message. What have I done wrong?
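
    The problem is lifetime, not dispatch: each loop iteration stores the address of a stack object that is destroyed at the end of that iteration, so every pointer in the vector dangles, and whatever gets printed is luck. A sketch of the fix against the classes above:

        // Allocate on the heap so the objects outlive the loops.
        for (int i = 0; i < num; i++)
            t.push_back(new Monster());
        for (int i = 0; i < num; i++)
            t.push_back(new Buddha());

        for (std::vector<Entity*>::size_type i = 0; i < t.size(); i++)
            t[i]->func();  // now each object prints its own message

        // Clean up; for this to be safe, Entity also needs a virtual destructor:
        //     virtual ~Entity() {}
        for (std::vector<Entity*>::size_type i = 0; i < t.size(); i++)
            delete t[i];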

  • Memory leak problem. iPhone SDK

    - by user326375
    Hello, I've got a problem I cannot solve. I just receive this error:

        Program received signal: "0".
        The Debugger has exited due to signal 10 (SIGBUS).

    Here is the method; if I comment it out, the problem goes away:

        - (void)loadTexture
        {
            const int num_tex = 10;
            glGenTextures(num_tex, &textures[0]);

            textureImage[0] = [UIImage imageNamed:@"wonder.jpg"].CGImage;    // TEXTURE #1
            textureImage[1] = [UIImage imageNamed:@"wonder.jpg"].CGImage;    // TEXTURE #2
            textureImage[2] = [UIImage imageNamed:@"wall_eyes.jpg"].CGImage; // TEXTURE #3
            textureImage[3] = [UIImage imageNamed:@"wall.jpg"].CGImage;      // TEXTURE #4
            textureImage[4] = [UIImage imageNamed:@"books.jpg"].CGImage;     // TEXTURE #5
            textureImage[5] = [UIImage imageNamed:@"bush.jpg"].CGImage;      // TEXTURE #6
            textureImage[6] = [UIImage imageNamed:@"mushroom.jpg"].CGImage;  // TEXTURE #7
            textureImage[7] = [UIImage imageNamed:@"roots.jpg"].CGImage;     // TEXTURE #8
            textureImage[8] = [UIImage imageNamed:@"roots.jpg"].CGImage;     // TEXTURE #9
            textureImage[9] = [UIImage imageNamed:@"clean.jpg"].CGImage;     // TEXTURE #10

            for(int i=0; i<num_tex; i++)
            {
                NSInteger texWidth = CGImageGetWidth(textureImage[i]);
                NSInteger texHeight = CGImageGetHeight(textureImage[i]);

                GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);

                CGContextRef textureContext = CGBitmapContextCreate(textureData,
                    texWidth, texHeight, 8, texWidth * 4,
                    CGImageGetColorSpace(textureImage[i]), kCGImageAlphaPremultipliedLast);
                CGContextDrawImage(textureContext,
                    CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight),
                    textureImage[i]);
                CGContextRelease(textureContext);

                glBindTexture(GL_TEXTURE_2D, textures[i]);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, textureData);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

                free(textureData);
            }
        }

    Can anyone help me with releasing/deleting objects in this method? Thanks.

  • How to invalidate cache when benchmarking?

    - by Michael Buen
    I have this code where, when I swap the order of UsingAs and UsingCast, their performance also swaps.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.IO;

        class Test
        {
            const int Size = 30000000;

            static void Main()
            {
                object[] values = new MemoryStream[Size];
                UsingAs(values);
                UsingCast(values);
                Console.ReadLine();
            }

            static void UsingCast(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = (MemoryStream)o;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("Cast: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }

            static void UsingAs(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = o as MemoryStream;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("As: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }
        }

    Outputs:

        As: 0 : 322
        Cast: 0 : 281

    When doing this...

        UsingCast(values);
        UsingAs(values);

    ...the results swap to this:

        Cast: 0 : 322
        As: 0 : 281

    When doing just this...

        UsingAs(values);

    ...the result is:

        As: 0 : 322

    When doing just this...

        UsingCast(values);

    ...the result is:

        Cast: 0 : 322

    Aside from running them independently, how do I invalidate the cache so that the second piece of code being benchmarked won't benefit from the memory already cached by the first? Benchmarking aside, I just love the fact that modern processors do this caching magic :-)

  • Issue with clipping rectangles and back to front rendering

    - by Milo
    Here is my problem: my rendering algorithm renders from back to front, but logically, clipping rectangles need to be applied from front to back. Hence the following does not work:

        void AguiWidgetManager::recursiveRender(const AguiWidget *root)
        {
            // recursively calls itself to render widgets from back to front
            AguiWidget* nonConstRoot = (AguiWidget*)root;

            if(!nonConstRoot->isVisable())
            {
                return;
            }

            // push the clipping rectangle
            if(nonConstRoot->isClippingChildren())
            {
                graphicsContext->pushClippingRect(nonConstRoot->getClippingRectangle());
            }

            if(nonConstRoot->isEnabled())
            {
                nonConstRoot->paint(AguiPaintEventArgs(true,graphicsContext));

                for(std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                    it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }

                for(std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                    it != root->getChildEndIterator(); ++it)
                {
                    recursiveRender(*it);
                }
            }
            else
            {
                nonConstRoot->paint(AguiPaintEventArgs(false,graphicsContext));

                for(std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                    it != root->getPrivateChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }

                for(std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                    it != root->getChildEndIterator(); ++it)
                {
                    recursiveRenderDisabled(*it);
                }
            }

            // release clipping rectangle
            if(nonConstRoot->isClippingChildren())
            {
                graphicsContext->popClippingRect();
            }
        }

    I could, of course, go to the top of the tree and apply clipping rectangles inward until I get to the currently rendered widget, but that would involve lots of clipping rectangles at 60 frames per second. I want to minimize calls to pushing and popping rectangles. What could I do? Thanks
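
    One common fix, sketched with assumed Rect and stack types rather than the real Agui ones: make pushClippingRect itself intersect the incoming rectangle with the current top of the stack. Then a child's effective clip region can never escape its ancestors', and back-to-front recursion yields the same result front-to-back application would:

        #include <algorithm>
        #include <vector>

        struct Rect { int x, y, w, h; };

        static Rect intersect(const Rect& a, const Rect& b)
        {
            int x1 = std::max(a.x, b.x);
            int y1 = std::max(a.y, b.y);
            int x2 = std::min(a.x + a.w, b.x + b.w);
            int y2 = std::min(a.y + a.h, b.y + b.h);
            Rect r = { x1, y1, std::max(0, x2 - x1), std::max(0, y2 - y1) };
            return r;
        }

        class ClipStack {
            std::vector<Rect> stack;
        public:
            void push(const Rect& r)
            {
                // Store the intersection, so the top of the stack is always the
                // combined clip region of the whole ancestor chain.
                stack.push_back(stack.empty() ? r : intersect(stack.back(), r));
            }
            void pop() { stack.pop_back(); }
            const Rect& top() const { return stack.back(); }
        };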

  • Output iterator's value_type

    - by wilhelmtell
    The STL commonly defines an output iterator like so:

        template<class Cont>
        class insert_iterator
            : public iterator<output_iterator_tag,void,void,void,void>
        {
            // ...

    Why do output iterators define value_type as void? It would be useful for an algorithm to know what type of value it is supposed to output. For example, consider a function that parses a URL query "key1=value1&key2=value2&key3=value3" into any container that holds key-value string elements:

        template<typename Ch,typename Tr,typename Out>
        void parse(const std::basic_string<Ch,Tr>& str, Out result)
        {
            std::basic_string<Ch,Tr> key, value;
            // loop over str, parse into key and value ...
            *result = typename iterator_traits<Out>::value_type(key, value);
        }

    The SGI reference page for value_type hints that this is because it's not possible to dereference an output iterator. But that's not the only use of value_type: I might want to instantiate one in order to assign it to the iterator.
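
    For insert iterators specifically there is a workaround worth noting (a sketch): they expose the container they write into through the container_type member typedef, so the element type can be recovered even though value_type is void:

        #include <iterator>
        #include <map>
        #include <string>

        template<typename Ch, typename Tr, typename Out>
        void parse(const std::basic_string<Ch,Tr>& str, Out result)
        {
            // std::insert_iterator (and back/front_insert_iterator) typedef
            // container_type; its value_type is the element type to construct.
            typedef typename Out::container_type::value_type value_type;
            std::basic_string<Ch,Tr> key, value;
            // ... loop over str, splitting into key and value ...
            *result = value_type(key, value);
        }

        // usage:
        //   std::map<std::string, std::string> m;
        //   parse(std::string("key1=value1"), std::inserter(m, m.end()));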

  • How to properly implement cheat codes?

    - by Axarydax
    Hi, what would be the best way to implement cheat codes in general? I have a WinForms application in mind, where a cheat code would unlock an easter egg, but the implementation details are not relevant. The best approach that comes to my mind is to keep an index for each code. Consider the famous DOOM codes, IDDQD and IDKFA, in a fictional C# app:

        string[] CheatCodes = { "IDDQD", "IDKFA"};
        int[] CheatIndexes = { 0, 0 };
        const int CHEAT_COUNT = 2;

        void KeyPress(char c)
        {
            for (int i = 0; i < CHEAT_COUNT; i++) // for each cheat code
            {
                if (CheatCodes[i][CheatIndexes[i]] == c)
                {
                    // we have hit the next key in sequence
                    if (++CheatIndexes[i] == CheatCodes[i].Length) // are we at the end?
                    {
                        // do cheat work
                        MessageBox.Show(CheatCodes[i]);
                        // reset cheat index so we can enter it next time
                        CheatIndexes[i] = 0;
                    }
                }
                else // mistyped, reset cheat index
                    CheatIndexes[i] = 0;
            }
        }

    Is this the right way to do it?

  • how to know location of return address on stack c/c++

    - by Dr Deo
    I have been reading about a function that can overwrite its return address:

        void foo(const char* input)
        {
            char buf[10];

            //What? No extra arguments supplied to printf?
            //It's a cheap trick to view the stack 8-)
            //We'll see this trick again when we look at format strings.
            printf("My stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n"); //%p, i.e. expect pointers

            //Pass the user input straight to secure code public enemy #1.
            strcpy(buf, input);
            printf("%s\n", buf);

            printf("Now the stack looks like:\n%p\n%p\n%p\n%p\n%p\n%p\n\n");
        }

    It was suggested that this is how the stack would look:

        Address of foo = 00401000
        My stack looks like:
        00000000
        00000000
        7FFDF000
        0012FF80
        0040108A  <-- We want to overwrite the return address for foo.
        00410EDE

    Questions:

    - Why did the author arbitrarily choose the second-to-last value as the return address of foo()?
    - Are values added to the stack from the bottom or from the top?
    - Apart from the function return address, what are the other values I apparently see on the stack? I.e., why isn't it filled with zeros?

    Thanks.
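
    As an aside, a sketch of a less fragile way to locate the return address than eyeballing printf output, using a builtin that exists in GCC and Clang:

        #include <stdio.h>

        void foo(void)
        {
            int local = 0;
            /* GCC/Clang builtin: the address foo will return to. */
            void *ret = __builtin_return_address(0);
            /* A local's address shows roughly where this stack frame lives. */
            printf("frame near %p, returns to %p\n", (void *)&local, ret);
        }

        int main(void)
        {
            foo();
            return 0;
        }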
