Search Results


  • Python bindings for a Vala library

    - by celil
    I am trying to create python bindings to a vala library using the following IBM tutorial as a reference. My initial directory has the following two files: test.vala using GLib; namespace Test { public class Test : Object { public int sum(int x, int y) { return x + y; } } } test.override %% headers #include <Python.h> #include "pygobject.h" #include "test.h" %% modulename test %% import gobject.GObject as PyGObject_Type %% ignore-glob *_get_type %% and try to build the python module source test_wrap.c using the following code build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c However, the last command fails with an error $ ./build.sh Traceback (most recent call last): File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1720, in <module> sys.exit(main(sys.argv)) File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1672, in main o = override.Overrides(arg) File "/usr/share/pygobject/2.0/codegen/override.py", line 52, in __init__ self.handle_file(filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 84, in handle_file self.__parse_override(buf, startline, filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 96, in __parse_override command = words[0] IndexError: list index out of range Is this a bug in pygobject, or is something wrong with my setup? What is the best way to call code written in vala from python? EDIT: Removing the extra line fixed the current problem, but now as I proceed to build the python module, I am facing another problem. Adding the following C file to the existing two in the directory: test_module.c #include <Python.h> void test_register_classes (PyObject *d); extern PyMethodDef test_functions[]; DL_EXPORT(void) inittest(void) { PyObject *m, *d; init_pygobject(); m = Py_InitModule("test", test_functions); d = PyModule_GetDict(m); test_register_classes(d); if (PyErr_Occurred ()) { Py_FatalError ("can't initialise module test"); } } and building with the following script build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c CFLAGS="`pkg-config --cflags pygobject-2.0` -I/usr/include/python2.6/ -I." LDFLAGS="`pkg-config --libs pygobject-2.0`" gcc $CFLAGS -fPIC -c test.c gcc $CFLAGS -fPIC -c test_wrap.c gcc $CFLAGS -fPIC -c test_module.c gcc $LDFLAGS -shared test.o test_wrap.o test_module.o -o test.so python -c 'import test; exit()' results in an error: $ ./build.sh ***INFO*** The coverage of global functions is 100.00% (1/1) ***INFO*** The coverage of methods is 100.00% (1/1) ***INFO*** There are no declared virtual proxies. ***INFO*** There are no declared virtual accessors. ***INFO*** There are no declared interface proxies. Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: ./test.so: undefined symbol: init_pygobject Where is the init_pygobject symbol defined? What have I missed linking to?
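
    A likely answer to the final question, offered as a hedged sketch: in PyGObject 2.x, init_pygobject() is not a linkable symbol but a macro defined in pygobject.h. Because test_module.c includes only <Python.h>, C's implicit-declaration rule turns the call into an undefined external, which only surfaces when the .so is imported. The sketch below is the same file with the missing include:

    ```c
    /* test_module.c, sketch of the fix; the only change is the second include.
     * In PyGObject 2.x, init_pygobject() is a macro defined in pygobject.h
     * that imports the gobject module and wires up the C API pointers. */
    #include <Python.h>
    #include "pygobject.h"

    void test_register_classes (PyObject *d);
    extern PyMethodDef test_functions[];

    DL_EXPORT(void) inittest(void)
    {
        PyObject *m, *d;
        init_pygobject();   /* now expands in place instead of being an extern call */
        m = Py_InitModule("test", test_functions);
        d = PyModule_GetDict(m);
        test_register_classes(d);
        if (PyErr_Occurred())
            Py_FatalError("can't initialise module test");
    }
    ```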


  • EXC_BAD_ACCESS in CFAttributedStringSetAttribute and NSNumber?

    - by RichardR
    Hi all, I am getting an infuriating EXC_BAD_ACCESS error in an objective c app I am working on. Any help you could offer would be much appreciated. I have tried the normal debug methods for this error (turning on NSZombieEnabled, checking retain/release/autorelease to make sure I'm not trying to access a deallocated object, etc.) and it hasn't seemed to help. Basically, the error always occurs in this function: ` void op_TJ(CGPDFScannerRef scanner, void *info) { PDFPage *self = info; CGPDFArrayRef array; NSMutableString *tempString = [NSMutableString stringWithCapacity:1]; NSMutableArray *kernArray = [[NSMutableArray alloc] initWithCapacity:1]; if(!CGPDFScannerPopArray(scanner, &array)) { [kernArray release]; return; } for(size_t n = 0; n < CGPDFArrayGetCount(array); n += 2) { if(n >= CGPDFArrayGetCount(array)) continue; CGPDFStringRef pdfString; // if we get a PDF string if (CGPDFArrayGetString(array, n, &pdfString)) { //get the actual string const unsigned char *charstring = CGPDFStringGetBytePtr(pdfString); //add this string to our temp string [tempString appendString:[NSString stringWithCString:(const char*)charstring encoding:[self pageEncoding]]]; //NSLog(@"string: %@", tempString); //get the space after this string CGPDFReal r = 0; if (n+1 < CGPDFArrayGetCount(array)) { CGPDFArrayGetNumber(array, n+1, &r); // multiply by the font size CGFloat k = r; k = -k/1000 * self.tmatrix.a * self.fontSize; CGFloat kKern = self.kern * self.tmatrix.a; k = k + kKern; // add the location and kern to the array NSNumber *tempKern = [NSNumber numberWithFloat:k]; NSLog(@"tempKern address: %p", tempKern); [kernArray addObject:[NSArray arrayWithObjects:[NSNumber numberWithInt:[tempString length] - 1], tempKern, nil]]; } } } // create an attribute string CFMutableAttributedStringRef attString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 10); CFAttributedStringReplaceString(attString, CFRangeMake(0, 0), (CFStringRef)tempString); //apply overall kerning NSNumber *tkern = [NSNumber numberWithFloat:self.kern * self.tmatrix.a * self.fontSize]; CFAttributedStringSetAttribute(attString, CFRangeMake(0, CFAttributedStringGetLength(attString)), kCTKernAttributeName, (CFNumberRef)tkern); //apply individual kern attributes for (NSArray *kernLoc in kernArray) { NSLog(@"kern location: %i, %i", [[kernLoc objectAtIndex:0] intValue],[[kernLoc objectAtIndex:1] floatValue]); CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]); } CFAttributedStringReplaceAttributedString([self cfAttString], CFRangeMake(CFAttributedStringGetLength([self cfAttString]), 0), attString); //release CFRelease(attString); [kernArray release]; } ` The program always crashes because of line CFAttributedStringSetAttribute(attString, CFRangeMake([[kernLoc objectAtIndex:0] intValue], 1), kCTKernAttributeName, (CFNumberRef)[kernLoc objectAtIndex:1]) And it seems to depend on a few things: if [kernLoc objectAtIndex:1] refers to an [NSNumber numberWithFloat:k] where k = 0 (in other words, if k = 0 above where I populate kernArray) then the program crashes almost immediately If I comment out the line k = k + kKern, it takes longer for the program to crash, but does eventually (why would the crash depend on this value?) If I change the length of CFRangeMake from 1 to 0, it takes a lot longer for the program to crash, but still eventually does. (I don't think I am trying to access beyond the bounds of attString, but am I missing something?) 
When it crashes, I get something similar to: #0 0x942c7ed7 in objc_msgSend () #1 0x00000013 in ?? () #2 0x0285b827 in CFAttributedStringSetAttribute () #3 0x0000568f in op_TJ (scanner=0x472a590, info=0x4a32320) at /Users/Richard/Desktop/AppTest/PDFHighlight 2/PDFScannerOperators.m:251 Any ideas? It seems like somewhere along the way I am overwriting memory or trying to access memory that has been changed, but I have no idea. If there's anymore information I can provide, please let me know. Thanks, Richard
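
    One concrete bug stands out before any memory theory, though it may or may not be this crash: the NSLog call inside the kernArray loop pairs %i with a float, which corrupts the vararg decoding. It is also worth guarding [tempString length] - 1, which goes negative when the appended string is empty and would hand CFRangeMake a location of -1. A hedged correction of the log line:

    ```objc
    // %i with a float argument mis-decodes the varargs; the float needs %f
    NSLog(@"kern location: %i, %f",
          [[kernLoc objectAtIndex:0] intValue],
          [[kernLoc objectAtIndex:1] floatValue]);
    ```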


  • Interactive Data Language, IDL: Does anybody care?

    - by Alex
    Anyone use a language called Interactive Data Language, IDL? It is popular with scientists. I think it is a poor language because it is proprietary (every terminal running it has to have an expensive license purchased) and it has minimal support (try searching for IDL, the language, right now on stack) . I am trying to convince my colleagues to stop using it and learn C/C++/Python/Fortran/Java/Ruby. Does anybody know about or even care about IDL enough to have opinions on it? What do you think of it? Should I tell my colleagues to stop wasting their time on it now? How can I convince them? Edit: People are getting the impression that I don't know or use IDL. Also, I said IDL has minimal support which is true in one sense, so I must clarify that the scientific libraries are indeed large. I use IDL all the time, but this is exactly the problem: I am only using IDL because colleagues use it. There is a file format IDL uses, the .sav, which can only be opened in IDL. So I must use IDL to work with this data and transfer the data back to colleagues, but I know I would be more efficient in another language. This is like someone sending you a microsoft word file in an email attachment and if you don't understand how wrong that is then you probably write too many words not enough code and you bought microsoft word. Edit: As an alternative to IDL Python is popular. Here is a list of The Pros of IDL (and the cons) from AstroBetter: Pros of IDL Mature many numerical and astronomical libraries available Wide astronomical user base Numerical aspect well integrated with language itself Many local users with deep experience Faster for small arrays Easier installation Good, unified documentation Standard GUI run/debug tool (IDLDE) Single widget system (no angst about which to choose or learn) SAVE/RESTORE capability Use of keyword arguments as flags more convenient Cons of IDL Narrow applicability, not well suited to general programming Slower for large arrays Array functionality less powerful Table support poor Limited ability to extend using C or Fortran, such extensions hard to distribute and support Expensive, sometimes problem collaborating with others that don’t have or can’t afford licenses. Closed source (only RSI can fix bugs) Very awkward to integrate with IRAF tasks Memory management more awkward Single widget system (useless if working within another framework) Plotting: Awkward support for symbols and math text Many font systems, portability issues (v5.1 alleviates somewhat) not as flexible or as extensible plot windows not intrinsically interactive (e.g., pan & zoom) Pros of Python Very general and powerful programming language, yet easy to learn. 
Strong, but optional, Object Oriented programming support Very large user and developer community, very extensive and broad library base Very extensible with C, C++, or Fortran, portable distribution mechanisms available Free; non-restrictive license; Open Source Becoming the standard scripting language for astronomy Easy to use with IRAF tasks Basis of STScI application efforts More general array capabilities Faster for large arrays, better support for memory mapping Many books and on-line documentation resources available (for the language and its libraries) Better support for table structures Plotting framework (matplotlib) more extensible and general Better font support and portability (only one way to do it too) Usable within many windowing frameworks (GTK, Tk, WX, Qt…) Standard plotting functionality independent of framework used plots are embeddable within other GUIs more powerful image handling (multiple simultaneous LUTS, optional resampling/rescaling, alpha blending, etc) Support for many widget systems Strong local influence over capabilities being developed for Python Cons of Python More items to install separately Not as well accepted in astronomical community (but support clearly growing) Scientific libraries not as mature: Documentation not as complete, not as unified Not as deep in astronomical libraries and utilities Not all IDL numerical library functions have corresponding functionality in Python Some numeric constructs not quite as consistent with language (or slightly less convenient than IDL) Array indexing convention “backwards” Small array performance slower No standard GUI run/debug tool Support for many widget systems (angst regarding which to choose) Current lack of function equivalent to SAVE/RESTORE in IDL matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots) Use of keyword arguments used as flags less convenient Plotting: comparatively immature, still much development going on missing some plot type (e.g., surface) 3-d capability requires VTK (though matplotlib has some basic 3-d capability)
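
    On the .sav lock-in point specifically: the format is readable outside IDL. A minimal sketch, assuming a SciPy recent enough to ship readsav (the filename here is hypothetical):

    ```python
    from scipy.io import readsav

    data = readsav('colleague_data.sav')  # dict-like: IDL variable name -> value
    print(data.keys())
    ```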


  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than 5 years ago I was playing with DirectSound and Direct3D and I found it really exciting although it took much time to get some good results with C++. I was a college student then. Now I have mostly enterprise development experience in C# and PHP, and I do it for living. There is really no chance to earn money with serious game development in our country. Each day more and more I find that I miss something. So I decided to spend an hour or so each day to do programming for fun. So my idea is to build a world simulation. I would like to begin with something simple - some human-like creatures that live their life - like Sims 3 but much more simple, just basic needs, basic animations, minimum graphic assets - I guess it won't be a city but just a large house for a start. The idea is to have some kind of a server application which stores the world data in MySQL database, and some client applications - body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D. So there are also 3D clients - I can enter that virtual world and see the AI bots living. When the bot enters visible area, it becomes material - loads a mesh and animations, so I can see it. When I leave, the bots lose their 3d mesh bodies again, but their virtual life still continues. With time I hope to make it like some expandable scriptable sandbox to experiment with various AI algorithms and so on. But I am not intended to create a full-blown MMORPG :D I have looked for many possible things I would need (free and open source) and now I have to make a choice: OGRE3D + enet (or RakNet). Old good C++. But won't it slow me down so much that I won't have fun any more? CrystalSpace. Formally not a game engine but very close to that. C++ again. MOgre (OGRE3D wrapper for .NET) + lidgren (networking library which is already used in some gaming projects). Good - I like C#, it is good for fast programming and also can be used for scripting. XNA seems just a framework, not an engine, so really have doubts, should I even look at XNA Game Studio :( Panda3D - full game engine with positive feedback. I really like idea to have all the toolset in one package, it has good reviews as a beginner-friendly engine...if you know Python. On the C++ side, Panda3D has almost non-existent documentation. I have 0 experience with Python, but I've heard it is easy to learn. And if it will be fun and challenging then I guess I would benefit from experience in one more programming language. Which of those would you suggest, not because of advanced features or good platform support but mostly for fun, easy workflow and expandability, and so I can create and integrate all the components I need - the server with the database, AI bots and a 3D client application?


  • Persistence classes in Qt

    - by zarzych
    Hi, I'm porting a medium-sized CRUD application from .Net to Qt and I'm looking for a pattern for creating persistence classes. In .Net I usually created abstract persistence class with basic methods (insert, update, delete, select) for example: public class DAOBase<T> { public T GetByPrimaryKey(object primaryKey) {...} public void DeleteByPrimaryKey(object primaryKey) {...} public List<T> GetByField(string fieldName, object value) {...} public void Insert(T dto) {...} public void Update(T dto) {...} } Then, I subclassed it for specific tables/DTOs and added attributes for DB table layout: [DBTable("note", "note_id", NpgsqlTypes.NpgsqlDbType.Integer)] [DbField("note_id", NpgsqlTypes.NpgsqlDbType.Integer, "NoteId")] [DbField("client_id", NpgsqlTypes.NpgsqlDbType.Integer, "ClientId")] [DbField("title", NpgsqlTypes.NpgsqlDbType.Text, "Title", "")] [DbField("body", NpgsqlTypes.NpgsqlDbType.Text, "Body", "")] [DbField("date_added", NpgsqlTypes.NpgsqlDbType.Date, "DateAdded")] class NoteDAO : DAOBase<NoteDTO> { } Thanks to .Net reflection system I was able to achieve heavy code reuse and easy creation of new ORMs. The simplest way to do this kind of stuff in Qt seems to be using model classes from QtSql module. Unfortunately, in my case they provide too abstract an interface. I need at least transactions support and control over individual commits which QSqlTableModel doesn't provide. Could you give me some hints about solving this problem using Qt or point me to some reference materials? Update: Based on Harald's clues I've implemented a solution that is quite similar to the .Net classes above. Now I have two classes. UniversalDAO that inherits QObject and deals with QObject DTOs using metatype system: class UniversalDAO : public QObject { Q_OBJECT public: UniversalDAO(QSqlDatabase dataBase, QObject *parent = 0); virtual ~UniversalDAO(); void insert(const QObject &dto); void update(const QObject &dto); void remove(const QObject &dto); void getByPrimaryKey(QObject &dto, const QVariant &key); }; And a generic SpecializedDAO that casts data obtained from UniversalDAO to appropriate type: template<class DTO> class SpecializedDAO { public: SpecializedDAO(UniversalDAO *universalDao) virtual ~SpecializedDAO() {} DTO defaultDto() const { return DTO; } void insert(DTO dto) { dao->insert(dto); } void update(DTO dto) { dao->update(dto); } void remove(DTO dto) { dao->remove(dto); } DTO getByPrimaryKey(const QVariant &key); }; Using the above, I declare the concrete DAO class as following: class ClientDAO : public QObject, public SpecializedDAO<ClientDTO> { Q_OBJECT public: ClientDAO(UniversalDAO *dao, QObject *parent = 0) : QObject(parent), SpecializedDAO<ClientDTO>(dao) {} }; From within ClientDAO I have to set some database information for UniversalDAO. That's where my implementation gets ugly because I do it like this: QMap<QString, QString> fieldMapper; fieldMapper["client_id"] = "clientId"; fieldMapper["name"] = "firstName"; /* ...all column <-> field pairs in here... */ dao->setFieldMapper(fieldMapper); dao->setTable("client"); dao->setPrimaryKey("client_id"); I do it in constructor so it's not visible at a first glance for someone browsing through the header. In .Net version it was easy to spot and understand. Do you have some ideas how I could make it better?
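
    One way to make the mapping visible in the header, sketched under the assumption that the DTOs stay QObjects: declare the table metadata declaratively with Q_CLASSINFO and let UniversalDAO recover it through QMetaObject, instead of pushing QMaps from each constructor. The "db:" key convention below is invented for the sketch:

    ```cpp
    class ClientDTO : public QObject {
        Q_OBJECT
        Q_CLASSINFO("db:table", "client")
        Q_CLASSINFO("db:pk", "client_id")
        Q_CLASSINFO("db:col:client_id", "clientId")   // column -> property
        Q_CLASSINFO("db:col:name", "firstName")
        Q_PROPERTY(int clientId READ clientId WRITE setClientId)
        Q_PROPERTY(QString firstName READ firstName WRITE setFirstName)
        // ...
    };

    // Inside UniversalDAO, the mapping is recovered generically:
    const QMetaObject *mo = dto.metaObject();
    for (int i = 0; i < mo->classInfoCount(); ++i) {
        QMetaClassInfo ci = mo->classInfo(i);
        // ci.name() carries the column key, ci.value() the property name
    }
    ```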


  • Postmortem debugging with WinDBG

    - by Drazar
    I have an WCF-service running on an server, and occasionally(1-2 times every month) it throws an COMException with the informative message ”Unknown error (0x8005008)”. When i googled for this particular error I only got threads about problems when creating virtual directories in IIS. And the source code hasn’t anything with making a virtual directory in IIS. DirectoryServiceLib.LdapProvider.Directory - CreatePost - Could not create employee for 195001010000,000000000000: System.Runtime.InteropServices.COMException (0x80005008): Unknown error (0x80005008) at System.DirectoryServices.PropertyValueCollection.PopulateList I've taken a memorydump when I catch the Exception for further analysis in WinDBG. After switching to the right thread I executed the !CLRStack command: 000000001b8ab6d8 000000007708671a [NDirectMethodFrameStandalone: 000000001b8ab6d8] Common.MemoryDump.MiniDumpWriteDump(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr) 000000001b8ab680 000007ff002808d8 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr, Int32, IntPtr, MINIDUMP_TYPE, IntPtr, IntPtr, IntPtr) 000000001b8ab780 000007ff00280812 Common.MemoryDump.CreateMiniDump(System.String) 000000001b8ab7e0 000007ff0027b218 DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String) 000000001b8ad6d8 000007fef8816869 [HelperMethodFrame: 000000001b8ad6d8] 000000001b8ad820 000007feec2b6c6f System.DirectoryServices.PropertyValueCollection.PopulateList() 000000001b8ad860 000007feec225f0f System.DirectoryServices.PropertyValueCollection..ctor(System.DirectoryServices.DirectoryEntry, System.String) 000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String) 000000001b8ad8f0 000007ff00274d34 Common.DirectoryEntryExtension.GetStringAttribute(System.String) 000000001b8ad940 000007ff0027f507 DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost) 000000001b8ad980 000007ff0027a7cf DirectoryServiceLib.LdapProvider.Directory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String) 000000001b8adbe0 000007ff00279532 DirectoryServiceLib.WCFDirectory.CreatePost(System.String, DirectoryServiceLib.Model.Post, DirectoryServiceLib.Model.Presumptions, Services.Common.SourceEnum, System.String) 000000001b8adc60 000007ff001f47bd DynamicClass.SyncInvokeCreatePost(System.Object, System.Object[], System.Object[]) My conclusion is that it fails when the code is calling System.DirectoryServices.PropertyCollection.get_Item(System.String). So after issuing an !CLRStack -a I get this result: 000000001b8ad8a0 000007feec22d023 System.DirectoryServices.PropertyCollection.get_Item(System.String) PARAMETERS: this = <no data> propertyName = <no data> LOCALS: <CLR reg> = 0x0000000001dcef78 <no data> My very first question is why does it display no data on the propertyname? I am kinda new on Windbg. 
However I executed an dumpobject on = 0x0000000001dcef78: 0:013> !do 0x0000000001dcef78 Name: System.String MethodTable: 000007fef66d6960 EEClass: 000007fef625eec8 Size: 74(0x4a) bytes File: C:\Windows\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll String: personalprescriptioncode Fields: MT Field Offset Type VT Attr Value Name 000007fef66dc848 40000ed 8 System.Int32 1 instance 24 m_stringLength 000007fef66db388 40000ee c System.Char 1 instance 70 m_firstChar 000007fef66d6960 40000ef 10 System.String 0 shared static Empty >> Domain:Value 0000000000174e10:00000000019d1420 000000001a886f50:00000000019d1420 << So when the source code wants to fetch the personalprescriptioncode from Active Directory(what is used for persistence layer) it fails. Looking back at the stack it is when issuing the Copy method. DirectoryServiceLib.LdapProvider.DirectoryPost.Copy(DirectoryServiceLib.LdapProvider.DirectoryPost) So looking in the sourcecode: DirectoryPost postInLimbo = DirectoryPostFactory.Instance().GetDirectoryPost(LdapConfigReader.Instance().GetConfigValue("LimboDN"), idGenPerson.ID.UserId); if (postInLimbo != null) newPost.Copy(postInLimbo); This code is looking for another post in OU=limbo with the same UserId and if it finds one it copies the attributes to the new post. In this case it does and it fails with personalprescriptioncode. I've looked in Active Directory under OU=Limbo and the post exist there with the attribute personalprescriptioncode=31243. Question 1: Why does it display no data for some of the PARAMETERS and LOCALS? Is it the GC who has cleaned up before the memorydump had been created. Question 2: Is there anymore i can do to get to the solution to this problem?
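
    On question 1: <no data> is the usual signature of JIT-optimized release code, where arguments and locals live in registers that are dead or reused by the time of the fault, so SOS cannot recover them; it is an optimizer artifact rather than the GC cleaning up. On question 2, since SOS is clearly already loaded, a sketch of the usual next commands: !dso lists the managed objects the faulting thread still references, !gcroot shows what keeps the suspect string alive, and !pe prints the details of the last managed exception.

    ```
    0:013> !dso
    0:013> !gcroot 0x0000000001dcef78
    0:013> !pe
    ```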


  • Crop circular or elliptical image from original UIImage

    - by vikas ojha
    I am working on openCV for detecting the face .I want face to get cropped once its detected.Till now I got the face and have marked the rect/ellipse around it on iPhone. Please help me out in cropping the face in circular/elliptical pattern (UIImage *) opencvFaceDetect:(UIImage *)originalImage { cvSetErrMode(CV_ErrModeParent); IplImage *image = [self CreateIplImageFromUIImage:originalImage]; // Scaling down /* Creates IPL image (header and data) ----------------cvCreateImage CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels ); */ IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3); /*SMOOTHES DOWN THYE GUASSIAN SURFACE--------:cvPyrDown*/ cvPyrDown(image, small_image, CV_GAUSSIAN_5x5); int scale = 2; // Load XML NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"]; CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL); // Check whether the cascade has loaded successfully. Else report and error and quit if( !cascade ) { NSLog(@"ERROR: Could not load classifier cascade\n"); //return; } //Allocate the Memory storage CvMemStorage* storage = cvCreateMemStorage(0); // Clear the memory storage which was used before cvClearMemStorage( storage ); CGColorSpaceRef colorSpace; CGContextRef contextRef; CGRect face_rect; // Find whether the cascade is loaded, to find the faces. If yes, then: if( cascade ) { CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20)); cvReleaseImage(&small_image); // Create canvas to show the results CGImageRef imageRef = originalImage.CGImage; colorSpace = CGColorSpaceCreateDeviceRGB(); contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault); //VIKAS CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef); CGContextSetLineWidth(contextRef, 4); CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5); // Draw results on the iamge:Draw all components of face in the form of small rectangles // Loop the number of faces found. 
for(int i = 0; i < faces->total; i++) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; // Calc the rect of faces // Create a new rectangle for drawing the face CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i); // CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef, // CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale)); face_rect = CGContextConvertRectToDeviceSpace(contextRef, CGRectMake(cvrect.x*scale, cvrect.y , cvrect.width*scale , cvrect.height*scale*1.25 )); facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate]; facedetectapp.grabcropcoordrect=face_rect; NSLog(@" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height); CGContextStrokeRect(contextRef, face_rect); //CGContextFillEllipseInRect(contextRef,face_rect); CGContextStrokeEllipseInRect(contextRef,face_rect); [pool release]; } } CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect); UIImage *returnImage = [UIImage imageWithCGImage:imageRef]; CGImageRelease(imageRef); CGContextRelease(contextRef); CGColorSpaceRelease(colorSpace); cvReleaseMemStorage(&storage); cvReleaseHaarClassifierCascade(&cascade); return returnImage; } } Thanks Vikas
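
    For the actual elliptical crop, the usual CoreGraphics approach is to clip before drawing: render the source into a face-sized context whose clip path is an ellipse, so everything outside stays transparent. A hedged sketch (coordinate flipping between UIKit and the CG bitmap context may still need adjusting):

    ```objc
    CGContextRef faceCtx = CGBitmapContextCreate(NULL,
        face_rect.size.width, face_rect.size.height, 8,
        face_rect.size.width * 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    // clip to an ellipse inscribed in the face rect, then draw the image
    // shifted so the face lands under the clip
    CGContextAddEllipseInRect(faceCtx,
        CGRectMake(0, 0, face_rect.size.width, face_rect.size.height));
    CGContextClip(faceCtx);
    CGContextDrawImage(faceCtx,
        CGRectMake(-face_rect.origin.x, -face_rect.origin.y,
                   originalImage.size.width, originalImage.size.height),
        originalImage.CGImage);
    CGImageRef ovalImage = CGBitmapContextCreateImage(faceCtx);
    UIImage *cropped = [UIImage imageWithCGImage:ovalImage];
    CGImageRelease(ovalImage);
    CGContextRelease(faceCtx);
    ```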


  • How does versioning work when using Boost Serialization for Derived Classes?

    - by Venkata Adusumilli
    When a Client serializes the following data: InternationalStudent student; student.id("Client ID"); student.firstName("Client First Name"); student.country("Client Country"); the Server receives the following: ID = "Client ID" Country = "Client First Name" instead of the following: ID = "Client ID" Country = "Client Country" The only difference between the Server and Client classes is the First Name of the Student. How can we make the Server ignore First Name recieved from the Client and process the Country? Server Side Classes class Student { public: Student(){} virtual ~Student(){} public: std::string id() { return idM; } void id(std::string id) { idM = id; } protected: friend class boost::serialization::access; protected: std::string idM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(idM); } }; class InternationalStudent : public Student { public: InternationalStudent() {} ~InternationalStudent() {} public: std::string country() { return countryM; } void country(std::string country) { countryM = country; } protected: friend class boost::serialization::access; protected: std::string countryM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(boost::serialization::base_object<Student>(*this)); archive & BOOST_SERIALIZATION_NVP(countryM); } }; Client Side Classes class Student { public: Student(){} virtual ~Student(){} public: std::string id() { return idM; } void id(std::string id) { idM = id; } std::string firstName() { return firstNameM; } void firstName(std::string name) { firstNameM = name; } protected: friend class boost::serialization::access; protected: std::string idM; std::string firstNameM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(idM); if (version >=1) { archive & BOOST_SERIALIZATION_NVP(firstNameM); } } }; BOOST_CLASS_VERSION(Student, 1) class InternationalStudent : public Student { public: InternationalStudent() {} ~InternationalStudent() {} public: std::string country() { return countryM; } void country(std::string country) { countryM = country; } protected: friend class boost::serialization::access; protected: std::string countryM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(boost::serialization::base_object<Student>(*this)); archive & BOOST_SERIALIZATION_NVP(countryM); } };
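
    What the symptom shows: the archive is read positionally, so the server's Student consumes only idM, and the derived class then reads the client's firstNameM bytes into countryM. On load, the version argument passed to serialize comes from the client's archive, so the server can detect and skip the field it does not model. A sketch of the server-side Student::serialize:

    ```cpp
    template<class A>
    void serialize(A& archive, const unsigned int version) {
        archive & BOOST_SERIALIZATION_NVP(idM);
        if (version >= 1) {
            // field the server does not model: read it into a throwaway
            std::string discardedFirstName;
            archive & boost::serialization::make_nvp("firstNameM", discardedFirstName);
        }
    }
    ```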


  • Need some help deciphering a line of assembler code, from .NET JITted code

    - by Lasse V. Karlsen
    In a C# constructor, that ends up with a call to this(...), the actual call gets translated to this: 0000003d call dword ptr ds:[199B88E8h] What is the DS register contents here? I know it's the data-segment, but is this call through a VMT-table or similar? I doubt it though, since this(...) wouldn't be a call to a virtual method, just another constructor. I ask because the value at that location seems to be bad in some way, if I hit F11, trace into (Visual Studio 2008), on that call-instruction, the program crashes with an access violation. The code is deep inside a 3rd party control library, where, though I have the source code, I don't have the assemblies compiled with enough debug information that I can trace it through C# code, only through the disassembler, and then I have to match that back to the actual code. The C# code in question is this: public AxisRangeData(AxisRange range) : this(range, range.Axis) { } Reflector shows me this IL code: .maxstack 8 L_0000: ldarg.0 L_0001: ldarg.1 L_0002: ldarg.1 L_0003: callvirt instance class DevExpress.XtraCharts.AxisBase DevExpress.XtraCharts.AxisRange::get_Axis() L_0008: call instance void DevExpress.XtraCharts.Native.AxisRangeData::.ctor(class DevExpress.XtraCharts.ChartElement, class DevExpress.XtraCharts.AxisBase) L_000d: ret It's that last call there, to the other constructor of the same class, that fails. The debugger never surfaces inside the other method, it just crashes. The disassembly for the method after JITting is this: 00000000 push ebp 00000001 mov ebp,esp 00000003 sub esp,14h 00000006 mov dword ptr [ebp-4],ecx 00000009 mov dword ptr [ebp-8],edx 0000000c cmp dword ptr ds:[18890E24h],0 00000013 je 0000001A 00000015 call 61843511 0000001a mov eax,dword ptr [ebp-4] 0000001d mov dword ptr [ebp-0Ch],eax 00000020 mov eax,dword ptr [ebp-8] 00000023 mov dword ptr [ebp-10h],eax 00000026 mov ecx,dword ptr [ebp-8] 00000029 cmp dword ptr [ecx],ecx 0000002b call dword ptr ds:[1889D0DCh] // range.Axis 00000031 mov dword ptr [ebp-14h],eax 00000034 push dword ptr [ebp-14h] 00000037 mov edx,dword ptr [ebp-10h] 0000003a mov ecx,dword ptr [ebp-0Ch] 0000003d call dword ptr ds:[199B88E8h] // this(range, range.Axis)? 00000043 nop 00000044 mov esp,ebp 00000046 pop ebp 00000047 ret Basically what I'm asking is this: What the purpose of the ds:[ADDR] indirection here? VMT-table is only for virtual isn't it? and this is constructor Could the constructor have yet to be JITted, which could mean that the call would actually call through a JIT shim? I'm afraid I'm in deep water here, so anything might and could help. Edit: Well, the problem just got worse, or better, or whatever. We are developing the .NET feature in a C# project in a Visual Studio 2008 solution, and debugging and developing through Visual Studio. However, in the end, this code will be loaded into a .NET runtime hosted by a Win32 Delphi application. In order to facilitate easy experimentation of such features, we can also configure the Visual Studio project/solution/debugger to copy the produced dll's to the Delphi app's directory, and then execute the Delphi app, through the Visual Studio debugger. Turns out, the problem goes away if I run the program outside of the debugger, but during debugging, it crops up, every time. Not sure that helps, but since the code isn't slated for production release for another 6 months or so, then it takes some of the pressure off of it for the test release that we have soon. 
I'll dive into the memory parts later, but probably not until over the weekend, and post a followup.
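
    For the first question, the usual CLR pattern (a hedged sketch, not verified against this exact runtime) is that the JIT emits non-virtual calls through a writable slot precisely so the target can be backpatched: the slot initially holds the address of a prestub, the first call triggers JIT compilation of the callee, and the runtime then patches the slot with the native address. A stale or mis-stepped slot would surface as exactly this kind of access violation under the debugger.

    ```
    call dword ptr ds:[199B88E8h]  ; slot initially -> prestub
                                   ; first call: prestub JIT-compiles the ctor
                                   ; runtime patches [199B88E8h] -> native code
                                   ; later calls dispatch straight to the ctor
    ```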


  • Undefined reference to ...

    - by Patrick LaChance
    I keep getting this error message every time I try to compile, and I cannot find out what the problem is. any help would be greatly appreciated: C:\DOCUME~1\Patrick\LOCALS~1\Temp/ccL92mj9.o:main.cpp:(.txt+0x184): undefined reference to 'List::List()' C:\DOCUME~1\Patrick\LOCALS~1\Temp/ccL92mj9.o:main.cpp:(.txt+0x184): undefined reference to 'List::add(int)' collect2: ld returned 1 exit status code: //List.h ifndef LIST_H define LIST_H include //brief Definition of linked list class class List { public: /** \brief Exception for operating on empty list */ class Empty : public std::exception { public: virtual const char* what() const throw(); }; /** \brief Exception for invalid operations other than operating on an empty list */ class InvalidOperation : public std::exception { public: virtual const char* what() const throw(); }; /** \brief Node within List */ class Node { public: /** data element stored in this node */ int element; /** next node in list / Node next; /** previous node in list / Node previous; Node (int element); ~Node(); void print() const; void printDebug() const; }; List(); ~List(); void add(int element); void remove(int element); int first()const; int last()const; int removeFirst(); int removeLast(); bool isEmpty()const; int size()const; void printForward() const; void printReverse() const; void printDebug() const; /** enables extra output for debugging purposes */ static bool traceOn; private: /** head of list */ Node* head; /** tail of list */ Node* tail; /** count of number of nodes */ int count; }; endif //List.cpp I only included the parts of List.cpp that might be the issue include "List.h" include include using namespace std; List::List() { //List::size = NULL; head = NULL; tail = NULL; } List::~List() { Node* current; while(head != NULL) { current = head- next; delete current-previous; if (current-next!=NULL) { head = current; } else { delete current; } } } void List::add(int element) { Node* newNode; Node* current; newNode-element = element; if(newNode-element head-element) { current = head-next; } else { head-previous = newNode; newNode-next = head; newNode-previous = NULL; return; } while(newNode-element current-element) { current = current-next; } if(newNode-element <= current-element) { newNode-previous = current-previous; newNode-next = current; } } //main.cpp include "List.h" include include using namespace std; //void add(int element); int main (char** argv, int argc) { List* MyList = new List(); bool quit = false; string value; int element; while(quit==false) { cinvalue; if(value == "add") { cinelement; MyList-add(element); } if(value=="quit") { quit = true; } } return 0; } I'm doing everything I think I'm suppose to be doing. main.cpp isn't complete yet, just trying to get the add function to work first. Any help will be greatly appreciated.
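
    The two undefined references are the classic symptom of List.cpp never being handed to the linker: main.cpp declares List through the header, but the definitions live in the other translation unit. Assuming g++/MinGW as the toolchain (the ccL92mj9.o temp file suggests it), a sketch of the build line:

    ```
    g++ -Wall main.cpp List.cpp -o main
    ```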


  • taglib link errors

    - by Vihaan Verma
    I m using taglib for one of my projects . The Debug/Release library is build using MSVC 10. On compiling the code with the library in taglib/taglib/Release some linker error are thrown . id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class TagLib::AudioPropertie s * __cdecl TagLib::FileRef::audioProperties(void)const " (__imp_?audioProperties@FileRef@TagLib@@QEBAPEAVAudioProp erties@2@XZ) referenced in function "struct MetaData __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,st ruct std::char_traits<char>,class std::allocator<char> >)" (?getMetaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@ DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl TagLib::Stri ng::~String(void)" (__imp_??1String@TagLib@@UEAA@XZ) referenced in function "struct MetaData __cdecl ID3::getMetaDa taOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMetaDataOfF ile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class std::basic_string<char ,struct std::char_traits<char>,class std::allocator<char> > __cdecl TagLib::String::to8Bit(bool)const " (__imp_?to8 Bit@String@TagLib@@QEBA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@_N@Z) referenced in function "struct MetaData __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class s td::allocator<char> >)" (?getMetaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator @D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __cdecl TagLib::File Ref::~FileRef(void)" (__imp_??1FileRef@TagLib@@UEAA@XZ) referenced in function "struct MetaData __cdecl ID3::getMet aDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMetaData OfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: class TagLib::Tag * __cdecl TagLib::FileRef::tag(void)const " (__imp_?tag@FileRef@TagLib@@QEBAPEAVTag@2@XZ) referenced in function "struct Meta Data __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator <char> >)" (?getMetaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z ) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: bool __cdecl TagLib::FileRef ::isNull(void)const " (__imp_?isNull@FileRef@TagLib@@QEBA_NXZ) referenced in function "struct MetaData __cdecl ID3: :getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getM etaDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl TagLib::FileRef::Fil eRef(class TagLib::FileName,bool,enum TagLib::AudioProperties::ReadStyle)" (__imp_??0FileRef@TagLib@@QEAA@VFileName @1@_NW4ReadStyle@AudioProperties@1@@Z) referenced in function "struct MetaData __cdecl ID3::getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMetaDataOfFile@ID3@@YA?AU 
MetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) id3.cpp.1.o : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __cdecl TagLib::FileName::Fi leName(char const *)" (__imp_??0FileName@TagLib@@QEAA@PEBD@Z) referenced in function "struct MetaData __cdecl ID3:: getMetaDataOfFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >)" (?getMe taDataOfFile@ID3@@YA?AUMetaData@@V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) I m only including tag.lib from taglib/taglib/Release folder . Is there some other library I m missing out?
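
    Every missing symbol carries the __imp_ prefix, which means the code was compiled against TagLib headers in DLL-import mode, so the linker wants the import library of a matching DLL build; the ...QEBA... manglings are x64, so a static or 32-bit tag.lib would fail in exactly this way. As a first check, a sketch assuming the MSVC tools are on the PATH:

    ```
    dumpbin /headers tag.lib | findstr machine
    ```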


  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: Write a script capable of running on the command line as python It should take in two words on the command line (or optionally if you'd prefer it can query the user to supply the two words via the console). Given those two words: a. Ensure they are of equal length b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded. If so compute whether you can reach the second word from the first by a series of steps as follows a. You can change one letter at a time b. Each time you change a letter the resulting word must also exist in the dictionary c. You cannot add or remove letters If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much. import collections import functools import re def time_func(func): import time def wrapper(*args, **kwargs): start = time.time() res = func(*args, **kwargs) timed = time.time() - start setattr(wrapper, 'time_taken', timed) return res functools.update_wrapper(wrapper, func) return wrapper class OneLetterGame: def __init__(self, dict_path): self.dict_path = dict_path self.words = set() def run(self, start_word, end_word): '''Runs the one letter game with the given start and end words. ''' assert len(start_word) == len(end_word), \ 'Start word and end word must of the same length.' self.read_dict(len(start_word)) path = self.shortest_path(start_word, end_word) if not path: print 'There is no path between %s and %s (took %.2f sec.)' % ( start_word, end_word, find_shortest_path.time_taken) else: print 'The shortest path (found in %.2f sec.) is:\n=> %s' % ( self.shortest_path.time_taken, ' -- '.join(path)) def _bfs(self, start): '''Implementation of breadth first search as a generator. The portion of the graph to explore is given on demand using get_neighboors. Care was taken so that a vertex / node is explored only once. ''' queue = collections.deque([(None, start)]) inqueue = set([start]) while queue: parent, node = queue.popleft() yield parent, node new = set(self.get_neighbours(node)) - inqueue inqueue = inqueue | new queue.extend([(node, child) for child in new]) @time_func def shortest_path(self, start, end): '''Returns the shortest path from start to end using bfs. ''' assert start in self.words, 'Start word not in dictionnary.' assert end in self.words, 'End word not in dictionnary.' paths = {None: []} for parent, child in self._bfs(start): paths[child] = paths[parent] + [child] if child == end: return paths[child] return None def get_neighbours(self, word): '''Gets every word one letter away from the a given word. We do not keep these words in memory because bfs accesses a given vertex only once. ''' neighbours = [] p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$' for i, w in enumerate(word)] p_word = '|'.join(p_word) for w in self.words: if w != word and re.match(p_word, w, re.I|re.U): neighbours += [w] return neighbours def read_dict(self, size): '''Loads every word of a specific size from the dictionnary into memory. 
''' for l in open(self.dict_path): l = l.decode('latin-1').strip().lower() if len(l) == size: self.words.add(l) if __name__ == '__main__': import sys if len(sys.argv) not in [3, 4]: print 'Usage: python one_letter_game.py start_word end_word' else: g = OneLetterGame(dict_path = '/usr/share/dict/words') try: g.run(*sys.argv[1:]) except AssertionError, e: print e
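
    One likely objection to the solution as posted: get_neighbours runs a regex over the entire dictionary for every dequeued word, making the search roughly quadratic in the word count. Bucketing words by wildcard pattern once makes each neighbour lookup proportional to the word length instead. A sketch:

    ```python
    import collections

    def build_buckets(words):
        # 'c*t' -> ['cat', 'cot', 'cut', ...], built once
        buckets = collections.defaultdict(list)
        for w in words:
            for i in range(len(w)):
                buckets[w[:i] + '*' + w[i+1:]].append(w)
        return buckets

    def neighbours(word, buckets):
        for i in range(len(word)):
            for w in buckets[word[:i] + '*' + word[i+1:]]:
                if w != word:
                    yield w
    ```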


  • C - struct problems - writing

    - by Catarrunas
    Hello, I'm making a program in C, and I'mm having some troubles with memory, I think. So my problem is: I have 2 functions that return a struct. When I run only one function at a time I have no problem whatsoever. But when I run one after the other I always get an error when writting to the second struct. Function struct item* ReadFileBIN(char *name) -- reads a binary file. struct tables* getMesasInfo(char* Filename) -- reads a text file. My code is this: #include "stdafx.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> int numberOfTables=0; int numberOfItems=0; //struct tables* mesas; //struct item* Menu; typedef struct item{ char nome[100]; int id; float preco; }; typedef struct tables{ int id; int capacity; bool inUse; }; struct tables* getMesasInfo(char* Filename){ struct tables* mesas; char *c; int counter,numberOflines=0,temp=0; char *filename=Filename; FILE * G; G = fopen(filename,"r"); if (G==NULL){ printf("Cannot open file.\n"); } else{ while (!feof(G)){ fscanf(G, "%s", &c); numberOflines++; } fclose(G); } /* Memory allocate for input array */ mesas = (struct tables *)malloc(numberOflines* sizeof(struct tables*)); counter=0; G=fopen(filename,"r"); while (!feof(G)){ mesas[counter].id=counter; fscanf(G, "%d", &mesas[counter].capacity); mesas[counter].inUse= false; counter++; } fclose(G); numberOfTables = counter; return mesas; } struct item* ReadFileBIN(char *name) { int total=0; int counter; FILE *ptr_myfile; struct item my_record; struct item* Menu; ptr_myfile=fopen(name,"r"); if (!ptr_myfile) { printf("Unable to open file!"); } while (!feof(ptr_myfile)){ fread(&my_record,sizeof(struct item),1,ptr_myfile); total=total+1; } numberOfItems=total-1; Menu = (struct item *)calloc(numberOfItems , sizeof(struct item)); fseek(ptr_myfile, sizeof(struct item), SEEK_END); rewind(ptr_myfile); for ( counter=1; counter < total ; counter++) { fread(&my_record,sizeof(struct item),1,ptr_myfile); Menu[counter] = my_record; printf("Nome: %s\n",Menu[counter].nome); printf("ID: %d\n",Menu[counter].id); printf("Preco: %f\n",Menu[counter].preco); } fclose(ptr_myfile); return Menu; } int _tmain(int argc, _TCHAR* argv[]) { struct item* tt = ReadFileBIN("menu.dat"); struct tables* t = getMesasInfo("Capacity.txt"); getchar(); }** Thanks in advance.
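
    Two defects stand out, and the first explains why each function works alone but the pair crashes: the undersized block corrupts the heap, and the damage only surfaces at the next allocation. A sketch of the corrections:

    ```c
    /* 1) reserve room for the structs themselves, not for pointers to them */
    mesas = (struct tables *)malloc(numberOflines * sizeof(struct tables));

    /* 2) fscanf("%s", &c) writes through an uninitialised char pointer;
     *    a real buffer is needed just to count the lines */
    char line[256];
    while (fscanf(G, "%255s", line) == 1) {
        numberOflines++;
    }
    ```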


  • Insane SmartGWT + GWT situation... Error on instantiating ListGridRecord?

    - by Xandel
    Hi all, I am asking this here in the hope that someone has maybe come across this situation too... I have posted this on the SmartGWT forum: I am having an issue when trying to instantiate a ListGridRecord object on my server side. I am using the ListGrid on the client side, I want to use GWT's RPC to pass back an array of ListGridRecord objects to populate the grid with. I know that SmartGWT is designed to link to a datasource but I want full control over when I populate the grid and this shouldn't be as much of a nightmare as it is to do. I have searched high and low and cannot find anyone complaining about the same thing. The exception however (listed below) has come up (in my search findings) as a possible memory error - where increasing the memory (-Xmx512m argument) has apparently solved the problem. It did not, however, sort out mine. If anyone can shed any light on this I would greatly appreciate it! Here are my details: Developing using Eclipse Galileo on Ubuntu 9.04 (Jaunty) and GWT 2.0.3, I built the initial GWT project using the webAppCreator bundled with the GWT 2.0.3 release and imported the project into Eclipse as described on the GWT Getting Started Page (as using the GWT Eclipse plugin caused even more nightmares when trying to connect to a database - this is apparently due to using the Google App Engine and turning it off as all the posts suggested only causes ClassNotFound exceptions). The line that causes the error is literally: ListGridRecord a = new ListGridRecord(); The error I get is the following: 00:00:25.916 [WARN] Exception while dispatching incoming RPC call com.google.gwt.user.server.rpc.UnexpectedException : Service method 'public abstract java.lang.String za.co.company.product.client.service.EmployeeServi ce.getAllEmployeeAsListGridRecord()' threw an unexpected exception: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er()V at com.google.gwt.user.server.rpc.RPC.encodeResponseF orFailure(RPC.java:378) at com.google.gwt.user.server.rpc.RPC.invokeAndEncode Response(RPC.java:581) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServi ceServlet.doPost(AbstractRemoteServiceServlet.java :62) at javax.servlet.http.HttpServlet.service(HttpServlet .java:637) at javax.servlet.http.HttpServlet.service(HttpServlet .java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(Ser vletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(Se rvletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle( SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(Se ssionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(Co ntextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebA ppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle (RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(Htt pConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.co ntent(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser. 
java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpPa rser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnec tion.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(Selec tChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run (QueuedThreadPool.java:488) Caused by: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er()V at com.smartgwt.client.util.LogUtil.setJSNIErrorHandl er(Native Method) at com.smartgwt.client.core.JsObject.(JsObjec t.java:30) at za.co.company.product.server.service.EmployeeServi ceImpl.getAllEmployeeAsListGridRecord(EmployeeServ iceImpl.java:83) at sun.reflect.NativeMethodAccessorImpl.invoke0(Nativ e Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Native MethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(De legatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.user.server.rpc.RPC.invokeAndEncode Response(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServle t.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServi ceServlet.doPost(AbstractRemoteServiceServlet.java :62) at javax.servlet.http.HttpServlet.service(HttpServlet .java:637) at javax.servlet.http.HttpServlet.service(HttpServlet .java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(Ser vletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(Se rvletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle( SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(Se ssionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(Co ntextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebA ppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle (RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(Ha ndlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(Htt pConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.co ntent(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser. java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpPa rser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnec tion.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(Selec tChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run (QueuedThreadPool.java:488) Thanks in advance! Xandel
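
    The UnsatisfiedLinkError is consistent with SmartGWT classes being client-only: ListGridRecord leans on JSNI, which exists only after GWT cross-compiles to JavaScript, so instantiating it inside a servlet on a plain JVM leaves the native method with nothing to link against. The usual shape of the fix is to return plain serializable DTOs over RPC and build the records on the client; a sketch, where EmployeeDTO and its getters are hypothetical:

    ```java
    public void onSuccess(EmployeeDTO[] result) {
        ListGridRecord[] records = new ListGridRecord[result.length];
        for (int i = 0; i < result.length; i++) {
            ListGridRecord r = new ListGridRecord();   // safe here: client side
            r.setAttribute("id", result[i].getId());
            r.setAttribute("name", result[i].getName());
            records[i] = r;
        }
        grid.setData(records);
    }
    ```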


  • PHP Image resize - Why is the image uploaded but not resized?

    - by Hans
    BACKGROUND I have a script to upload an image. One to keep the original image and one to resize the image. 1. If the image dimensions (width & height) are within max dimensions I use a simple "copy" direct to folder UserPics. 2. If the original dimensions are bigger than max dimensions I want to resize the width and height to be within max. Both of them are uploading the image to the folder, but in case 2, the image will not be resized. QUESTION Is there something wrong with the script? Is there something wrong with the settings? SETTINGS Server: WAMP 2.0 PHP: 5.3.0 PHP.ini: GD2 enabled, Memory=128M (have tried 1000M) Tried imagetypes uploaded: jpg, jpeg, gif, and png (same result for all of them) SCRIPT //Uploaded image $filename = stripslashes($_FILES['file']['name']); //Read filetype $i = strrpos($filename,"."); if (!$i) { return ""; } $l = strlen($filename) - $i; $extension = substr($filename,$i+1,$l); $extension = strtolower($extension); //New picture name = maxid+1 (from database) $query = mysql_query("SELECT MAX(PicId) AS number FROM userpictures"); $row = mysql_fetch_array($query); $imagenumber = $row['number']+1; //New name of image (including path) $image_name=$imagenumber.'.'.$extension; $newname = "UserPics/".$image_name; //Check width and height of uploaded image list($width,$height)=getimagesize($uploadedfile); //Check memory to hold this image (added only as checkup) $imageInfo = getimagesize($uploadedfile); $requiredMemoryMB = ( $imageInfo[0] * $imageInfo[1] * ($imageInfo['bits'] / 8) * $imageInfo['channels'] * 2.5 ) / 1024; echo $requiredMemoryMB."<br>"; //Max dimensions that can be uploaded $maxwidth = 20; $maxheight = 20; // Check if dimensions shall be original if ($width > $maxwidth || $height > $maxheight) { //Make jpeg from uploaded image if ($extension=="jpg" || $extension=="jpeg" || $extension=="pjpeg" ) { $modifiedimage = imagecreatefromjpeg($uploadedfile); } elseif ($extension=="png") { $modifiedimage = imagecreatefrompng($uploadedfile); } elseif ($extension=="gif") { $modifiedimage = imagecreatefromgif($uploadedfile); } //Change dimensions if ($width > $height) { $newwidth = $maxwidth; $newheight = ($height/$width)*$newwidth; } else { $newheight = $maxheight; $newwidth = ($width/$height)*$newheight; } //Create new image with new dimensions $newdim = imagecreatetruecolor($newwidth,$newheight); imagecopyresized($newdim,$modifiedimage,0,0,0,0,$newwidth,$newheight,$width,$height); imagejpeg($modifiedimage,$newname,60); // Remove temp images imagedestroy($modifiedimage); imagedestroy($newdim); } else { // Just add picture to folder without resize (if org dim < max dim) $newwidth = $width; $newheight = $height; $copied = copy($_FILES['file']['tmp_name'], $newname); } //Add image information to the MySQL database mysql_query("SET character_set_connection=utf8", $dbh); mysql_query("INSERT INTO userpictures (PicId, Picext, UserId, Width, Height, Size) VALUES('$imagenumber', '$extension', '$_SESSION[userid]', '$newwidth', '$newheight', $size)")
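
    The resize branch draws the scaled pixels into $newdim but then hands imagejpeg() the untouched source image, which is why the upload succeeds at the original size. A sketch of the fix (imagecopyresampled is optional, but it resamples more smoothly than imagecopyresized):

    ```php
    imagecopyresampled($newdim, $modifiedimage, 0, 0, 0, 0,
                       $newwidth, $newheight, $width, $height);
    imagejpeg($newdim, $newname, 60);   // write the resized canvas, not the source
    ```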


  • ASP.NET MVC DropDownList db.SaveChanges not saving selection

    - by WMIF
    I have looked through a ton of tutorials and suggestions on how to work with DropDownList in MVC. I was able to get most of it working, but the selected item is not saving into the database. I am using MVC 3 and Razor for the view. My DropDownList is getting created with the correct values and good looking HTML. When I set a breakpoint, I can see the correct selected item ID in the model getting sent to controller. When the view goes back to the index, the DropDownList value is not set. The other values save just fine. Here are the related views. The DropDownList is displaying a list of ColorModel names as text with the ID as the value. public class ItemModel { [Key] public int ItemID { get; set; } public string Name { get; set; } public string Description { get; set; } public virtual ColorModel Color { get; set; } } public class ItemEditViewModel { public int ItemID { get; set; } public string Name { get; set; } public string Description { get; set; } public int ColorID { get; set; } public IEnumerable<SelectListItem> Colors { get; set; } } public class ColorModel { [Key] public int ColorID { get; set; } public string Name { get; set; } public virtual IList<ItemModel> Items { get; set; } } Here are the controller actions. public ActionResult Edit(int id) { ItemModel itemmodel = db.Items.Find(id); ItemEditViewModel itemEditModel; itemEditModel = new ItemEditViewModel(); itemEditModel.ItemID = itemmodel.ItemID; if (itemmodel.Color != null) { itemEditModel.ColorID = itemmodel.Color.ColorID; } itemEditModel.Description = itemmodel.Description; itemEditModel.Name = itemmodel.Name; itemEditModel.Colors = db.Colors .ToList() .Select(x => new SelectListItem { Text = x.Name, Value = x.ColorID.ToString() }); return View(itemEditModel); } [HttpPost] public ActionResult Edit(ItemEditViewModel itemEditModel) { if (ModelState.IsValid) { ItemModel itemmodel; itemmodel = new ItemModel(); itemmodel.ItemID = itemEditModel.ItemID; itemmodel.Color = db.Colors.Find(itemEditModel.ColorID); itemmodel.Description = itemEditModel.Description; itemmodel.Name = itemEditModel.Name; db.Entry(itemmodel).State = EntityState.Modified; db.SaveChanges(); return RedirectToAction("Index"); } return View(itemEditModel); } The view has this for the DropDownList, and the others are just EditorFor(). @Html.DropDownListFor(model => model.ColorID, Model.Colors, "Select a Color") When I set the breakpoint on the db.Color.Find(...) line, I show this in the Locals window for itemmodel.Color: {System.Data.Entity.DynamicProxies.ColorModel_0EB80C07207CA5D88E1A745B3B1293D3142FE2E644A1A5202B90E5D2DAF7C2BB} When I expand that line, I can see the ColorID that I chose from the dropdown box, but it does not save into the database.
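
    A likely cause: Entry(itemmodel).State = EntityState.Modified marks only the scalar properties of the detached object, while Color is an independent association, so the relationship change never reaches the database. Updating the tracked entity instead, reusing the Find pattern the GET action already uses, lets the context record it. A sketch of the POST body:

    ```csharp
    ItemModel itemmodel = db.Items.Find(itemEditModel.ItemID);  // tracked entity
    itemmodel.Name = itemEditModel.Name;
    itemmodel.Description = itemEditModel.Description;
    itemmodel.Color = db.Colors.Find(itemEditModel.ColorID);    // tracked change
    db.SaveChanges();
    ```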


  • More localized, efficient Lowest Common Ancestor algorithm given multiple binary trees?

    - by mstksg
    I have multiple binary trees stored as an array. Each slot is either nil (or null; pick your language) or a fixed tuple storing two numbers: the indices of the two "children". No node will have only one child -- it's either none or two. Think of each slot as a binary node that only stores pointers to its children, and no inherent value. Take this system of binary trees:

        0           1
       / \         / \
      2   3       4   5
     / \             / \
    6   7           8   9
       / \
     10   11

    The associated array would be:

        0       1       2      3     4      5      6       7       8     9     10    11
    [ [2,3] , [4,5] , [6,7] , nil , nil , [8,9] , nil , [10,11] , nil , nil , nil , nil ]

    I've already written simple functions to find the direct parent of a node (simply by searching from the front until there is a node that contains the child). Furthermore, let us say that at relevant times, any of the trees may be anywhere from a few to a few thousand levels deep.

    I'd like to find a function P(m,n) to find the lowest common ancestor of m and n -- to put it more formally, the LCA is defined as the "lowest", or deepest, node which has m and n as descendants (children, or children of children, etc.). If there is none, a nil would be a valid return. Some examples, given our tree:

    P( 6,11) # => 2
    P( 3,10) # => 0
    P( 8, 6) # => nil
    P( 2,11) # => 2

    The main method I've been able to find is one that uses an Euler trace, which turns the given tree (with a node A as the invisible parent of 0 and 1, at depth -1) into:

    A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A

    And from that, simply find the node between your given m and n that has the lowest number. For example, to find P(6,11), look for a 6 and an 11 on the trace; the lowest number between them is 2, and that's your answer. If A is between them, return nil.

    -- Calculating P(6,11) --
    A-0-2-6-2-7-10-7-11-7-2-0-3-0-A-1-4-1-5-8-5-9-5-1-A
          ^ ^        ^
          | |        |
          m lowest   n

    Unfortunately, I do believe that finding the Euler trace of a tree that can be several thousand levels deep is a bit machine-taxing... and because my tree is constantly being changed throughout the course of the program, every time I wanted to find the LCA I'd have to re-calculate the Euler trace and hold it in memory. Is there a more memory-efficient way, given the framework I'm using? One that maybe iterates upwards? One way I could think of would be to "count" the generation/depth of both nodes, climb the lower node until it matches the depth of the higher one, and then advance both together until they meet. But that'd involve climbing up from level, say, 3025 back to 0, twice, just to count the generations, using a terribly inefficient climbing-up algorithm in the first place, and then re-climbing back up. Are there any other better ways?
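
    As a point of comparison for the "iterate upwards" idea at the end: if each slot's parent index is cached (or is populated once by the existing parent-search), depth counting plus lockstep climbing answers a query in O(depth) with no Euler trace to rebuild. A minimal C++ sketch, where the parent vector and the NIL sentinel are assumed representations:

    #include <vector>

    const int NIL = -1;

    // parent[i] is the index of i's parent, or NIL for a root.
    // (The question's parent-search can populate this once per change.)
    int depthOf(const std::vector<int>& parent, int n) {
        int d = 0;
        for (int p = parent[n]; p != NIL; p = parent[p]) ++d;
        return d;
    }

    int lowestCommonAncestor(const std::vector<int>& parent, int m, int n) {
        int dm = depthOf(parent, m);
        int dn = depthOf(parent, n);
        while (dm > dn) { m = parent[m]; --dm; }   // lift the deeper node
        while (dn > dm) { n = parent[n]; --dn; }
        while (m != n) {                           // walk up in lockstep
            m = parent[m];
            n = parent[n];
            if (m == NIL || n == NIL) return NIL;  // different trees
        }
        return m;  // may be m or n itself if one is the other's ancestor
    }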


  • Where are the function address literals in C++?

    - by academicRobot
    First of all, maybe "literals" is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first-class citizens).

    <UPDATE> After some reading, with help from the answer by Chris Dodd, what I'm looking for is literal function addresses as template parameters. Chris's answer indicates how to do this for standard functions, but how can the addresses of member functions be used as template parameters? Since the standard prohibits non-static member function addresses as template parameters (C++03 14.3.2.3), I suspect the workaround is quite complicated. Any ideas for a workaround? Below, the original form of the question is left as-is for context. </UPDATE>

    The idea is that when you make a conventional function call, it compiles to something like this:

    callq <immediate address>

    But if you make a function call using a function pointer, it compiles to something like this:

    mov <memory location>,%rax
    callq *%rax

    Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to

    template <int int_literal>
    struct my_template {...};

    I'd like to write

    template <func_literal_t func_literal>
    struct my_template {...};

    and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible, since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder?

    P.S. I use function pointers and function objects all the time, and they are great. But this post got me thinking about the "don't pay for what you don't use" principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.

    Edit: The intent of this question is not to implement delegates; rather, it is to identify a pattern that will embed a conventional function call (in immediate mode) directly into third-party code, possibly a template.
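
    For reference, a minimal sketch of the free-function case the update mentions, plus the member-function variant: C++03 does accept a pointer-to-member constant of the form &Class::method as a non-type template argument, which may be the workaround being asked about (worth verifying against your compiler; the class and function names here are invented):

    #include <cstdio>

    void callback(int x) { std::printf("free: %d\n", x); }

    struct Widget {
        void handle(int x) { std::printf("member: %d\n", x); }
    };

    // Free-function address as a non-type template parameter:
    // F is a compile-time constant, so the call can be emitted directly.
    template <void (*F)(int)>
    struct FreeCaller {
        void operator()(int x) const { F(x); }
    };

    // Pointer-to-member constant as a non-type template parameter.
    template <void (Widget::*M)(int)>
    struct MemberCaller {
        void operator()(Widget& w, int x) const { (w.*M)(x); }
    };

    int main() {
        FreeCaller<&callback> f;
        f(1);

        Widget w;
        MemberCaller<&Widget::handle> m;
        m(w, 2);
    }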


  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, many times a blessing, and cost-effective (instead of leasing expensive cables). I am not arguing against remote desktops - just that if one has the alternative of using either a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not developing software. Therefore the company IT policies are mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. Further, the typical employee runs typical office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application shall be deployed into a physically different network, called the test network. Software development, of course, also belongs in the test network. To access the test network, the company has created a standard policy which dictates that access shall go only via a remote desktop client. Practically, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one can access, run, and install all development tools, like the Eclipse IDE. Another solution would be a dedicated physical computer which is physically connected only to the test network. Both solutions are available in the company.

    I have tested both approaches and found running Eclipse IDE, SQL Developer, and the like in the remote desktop client to be sluggish (keyboard strokes are delayed), commands like Alt-Tab take me out of the remote client, and so on. Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client - it is just not optimal and, frankly, demotivating.

    Now, with the new policies put in place, plans are to remove the physical computers connected to the test network. I am looking for help to argue why software developers shall have a dedicated physical development computer, to be productive and cost-effective. Remember that we are physically in the office. Further, note that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. However, in my case it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level; they are down a few hours per month, with the lowest priority on the fix list. Help is appreciated.


  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that; it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember, this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other.

    Choice/Method #1, semi-pseudocode:

    public void Draw()
    {
        // map tiles are drawn left-to-right, top-to-bottom
        for (int x = 0; x < mapWidth; x++)
        {
            for (int y = 0; y < mapHeight; y++)
            {
                SpriteBatch.Draw(
                    MyLargeTexture,              // one large 256 x 4096 texture
                    new Rectangle(x, y, 32, 32), // destination rectangle - ignore this, it's OK
                    new Rectangle(x, y, 32, 32), // notice the source rectangle "cuts out" a 32 by 32 square from the texture, corresponding to the loop
                    Color.White);                // no tint - ignore this, it's OK
            }
        }
    }

    Caption: so, effectively, the first method references one large texture many, many times, each time using a small rectangle of this large texture to draw the appropriate tile image.

    Choice/Method #2, semi-pseudocode:

    public void Draw()
    {
        // map tiles are drawn left-to-right, top-to-bottom
        for (int x = 0; x < mapWidth; x++)
        {
            for (int y = 0; y < mapHeight; y++)
            {
                Texture2D tileTexture = map.GetTileTexture(x, y); // a small 32 by 32 texture (different each iteration of the loop)
                SpriteBatch.Draw(
                    tileTexture,
                    new Rectangle(x, y, 32, 32), // destination rectangle - ignore this, it's OK
                    new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // the source rectangle uses the entire texture, because the entire texture IS 32 by 32
                    Color.White);                // no tint - ignore this, it's OK
            }
        }
    }

    Caption: so, effectively, the second method draws many small textures many times.

    The question: which method, and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32 by 32 texture - an image - which has to allocate memory for the RGBA pixels 32 by 32 times - is that 4096 bytes per tile, then? So, which method, and why? First priority is speed, then memory load, then efficiency or whatever you experts believe.
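
    One detail worth noting in method #1: the source rectangle must be derived from the tile's position inside the atlas, not from the map coordinates - the semi-pseudocode conflates the two. A sketch of the usual index-to-atlas mapping, with invented names (TileSize, TilesPerRow, SourceRect) and assuming these members live in the Game class:

    // Map a tile index to its source rectangle in the 256 x 4096 atlas
    // (a 256-pixel-wide atlas holds 8 tiles per row at 32 px/tile).
    const int TileSize = 32;
    const int TilesPerRow = 256 / TileSize;

    Rectangle SourceRect(int tileIndex)
    {
        int col = tileIndex % TilesPerRow;
        int row = tileIndex / TilesPerRow;
        return new Rectangle(col * TileSize, row * TileSize, TileSize, TileSize);
    }

    // Usage inside Draw(), with the destination in *pixel* coordinates:
    // spriteBatch.Draw(MyLargeTexture,
    //                  new Rectangle(x * TileSize, y * TileSize, TileSize, TileSize),
    //                  SourceRect(map.GetTileIndex(x, y)),
    //                  Color.White);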


  • What does a WinForm application need to be designed for usability, and be robust, clean, and professional?

    - by msorens
    One of the principal problems impeding productivity in software implementation is the classic conundrum of "reinventing the wheel". Of late I am a .NET developer, and even the wonderful wizardry of .NET and Visual Studio covers only a portion of this challenging issue. Below I present my initial thoughts both on what is available and what should be available from .NET on a WinForm, focusing on good usability - that is, aspects of an application exposed to the user, making the user experience easier and/or better. (I do include a couple of items not visible to the user, because I feel strongly about them, such as diagnostics.) I invite you to contribute to these lists.

    LIST A: Components provided by .NET. These are substantially complete components provided by .NET, i.e. those requiring at most trivial coding to use.

    - "About" dialog -- add it with a couple of clicks, then customize.
    - Persist settings across invocations -- .NET has the support; just use a few lines of code to glue them together (see the sketch after these lists).
    - Migrate settings with a new version -- a powerful one, available with one line of code (also in the sketch below).
    - Tooltips (and infotips) -- .NET includes just plain-text tooltips; third-party libraries provide richer ones.
    - Diagnostic support -- TraceSources, TraceListeners, and more are built in.
    - Internationalization -- support for tailoring your app to languages other than your own.

    LIST B: Components not provided by .NET. These are not supplied at all by .NET, or supplied only as rudimentary elements requiring substantial work to be realized.

    - Splash screen -- a small window present during program startup with your logo, loading messages, etc.
    - Tip of the day -- a mini-tutorial presented one bit at a time, each time the user starts your app.
    - Check for available updates -- a facility to query a server to see whether the user is running the latest version of your app, then provide a simple way to upgrade if a new version is found.
    - Maximize to multiple monitors -- the canonical window allows you to maximize to a single monitor only; in my apps I allow maximizing across multiple monitors with a click.
    - Taskbar notifier -- flash the taskbar when your backgrounded app has new info for the user.
    - Options dialogs -- multi-page dialogs letting the user customize the app settings to his/her own preferences.
    - Progress indicator -- for long-running operations, give the user feedback on how far there is left to go.
    - Memory gauge -- an indicator (either absolute or percentage) of how much memory is used by your app.

    LIST C: Stylistic and/or tiny bits of functionality. This list includes bits of functionality that are too tiny to merit being called components, along with stylistic concerns (which admittedly overlap with the Windows User Experience Interaction Guidelines).

    - Design a form for resizing -- unless you are restricting your form to a fixed size, use anchors and docking so that it does what is reasonable when enlarged or shrunk by the user.
    - Set the tab order on a form -- repeated Tab presses by the user should advance from field to field in a logical order, rather than the default order in which you added the fields.
    - Adjust controls to be aware of operating modes -- when starting a background operation with, for example, a "Go" button, disable that "Go" button until the operation completes.
    - Provide access keys for all menu items (per UXGuide).
    - Provide shortcut keys for commonly used menu items (per UXGuide).
    - Set up some (global or important or common) shortcut keys without associating them to menu items.
    - Allow some menu items to be invoked with or without modifier keys (Shift, Ctrl, Alt), where the modifier key is useful to vary the operation slightly.
    - Hook up Escape and Enter on child forms to do what is reasonable.
    - Decorate any library classes with documentation comments and attributes -- this allows Visual Studio to leverage them for IntelliSense and property descriptions.
    - Spell-check your code!

    What else would you include?
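
    On the two settings items in LIST A, a minimal sketch of the glue code, assuming a designer-generated Properties.Settings with a user-scoped string setting LastUser and a bool UpgradeNeeded flag (both invented names), wired to a form named MainForm:

    // Sketch: persist settings across invocations and migrate them after an
    // upgrade. "LastUser" and "UpgradeNeeded" (default true) are user-scoped
    // settings assumed to be defined in the project's Settings designer.
    private void MainForm_Load(object sender, EventArgs e)
    {
        if (Properties.Settings.Default.UpgradeNeeded)
        {
            Properties.Settings.Default.Upgrade();          // the one-line migration
            Properties.Settings.Default.UpgradeNeeded = false;
        }
        userTextBox.Text = Properties.Settings.Default.LastUser;
    }

    private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        Properties.Settings.Default.LastUser = userTextBox.Text;
        Properties.Settings.Default.Save();                 // persist to user.config
    }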


  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding implementation of the State design pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system - so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve it. Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems - I've omitted them in this snippet):

    #include <iostream>
    #include <memory>
    using namespace std;

    class Context
    {
    public:
        friend class State;
        Context() { }
    private:
        State* m_state;
    };

    class State
    {
    public:
        State() { }
        virtual void Trigger1() = 0;
        virtual void Trigger2() = 0;
    };

    class LLT : public State
    {
    public:
        LLT() { }
        void Trigger1() { new DH(); }
        void Trigger2() { new DL(); }
    };

    class ALL : public State
    {
    public:
        ALL() { }
        void Trigger1() { new LLT(); }
        void Trigger2() { new DH(); }
    };

    // DL needs to 'know' about DH.
    class DL : public State
    {
    public:
        DL() { }
        void Trigger1() { new ALL(); }
        void Trigger2() { new DH(); }
    };

    class HLT : public State
    {
    public:
        HLT() { }
        void Trigger1() { new DH(); }
        void Trigger2() { new DL(); }
    };

    class AHL : public State
    {
    public:
        AHL() { }
        void Trigger1() { new DH(); }
        void Trigger2() { new HLT(); }
    };

    // DH needs to 'know' about DL.
    class DH : public State
    {
    public:
        DH() { }
        void Trigger1() { new AHL(); }
        void Trigger2() { new DL(); }
    };

    int main()
    {
        auto_ptr<LLT> llt (new LLT);
        auto_ptr<ALL> all (new ALL);
        auto_ptr<DL>  dl  (new DL);
        auto_ptr<HLT> hlt (new HLT);
        auto_ptr<AHL> ahl (new AHL);
        auto_ptr<DH>  dh  (new DH);
        return 0;
    }

    The problem is basically that in the State pattern, state transitions are made by invoking the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor, because it's not possible to pre-declare both of the 'problem' states. I had a look at this article, and the template method there seemed the ideal solution - but it doesn't compile, and my knowledge of templates is rather limited... The other idea I had was to introduce a helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclass's constructor. But I think that was rather ambitious... Finally, would a direct implementation of the Factory Method design pattern be the best way to resolve the entire problem?
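
    For reference, the standard cure for such a cycle is to separate declaration from definition: forward-declare the states, declare the triggers inside each class, and define them only once every class is complete. A minimal sketch reduced to the two mutually dependent states (ownership and the Context plumbing are ignored; returning the next state instead of leaking it is a deliberate tweak of this sketch):

    class State {
    public:
        virtual ~State() { }
        virtual State* Trigger1() = 0;
        virtual State* Trigger2() = 0;
    };

    class DL;  // forward declarations break the cycle
    class DH;

    class DH : public State {
    public:
        State* Trigger1();  // declared here...
        State* Trigger2();
    };

    class DL : public State {
    public:
        State* Trigger1();
        State* Trigger2();
    };

    // ...and defined once both classes are complete.
    State* DH::Trigger1() { return new DL; }
    State* DH::Trigger2() { return new DL; }
    State* DL::Trigger1() { return new DH; }
    State* DL::Trigger2() { return new DH; }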


  • C++ templated factory constructor/de-serialization

    - by KRao
    Hi, I was looking at the boost serialization library, and the intrusive way to provide support for serialization is to define a member function with this signature (simplifying):

    class ToBeSerialized {
    public:
        //Define this to support serialization
        //Notice: not a virtual function!
        template<class Archive>
        void serialize(Archive & ar) {.....}
    };

    Moreover, one way to support serialization of derived classes through base pointers is to use a macro of the type:

    //No mention of the base class(es) from which Derived_class inherits
    BOOST_CLASS_EXPORT_GUID(Derived_class, "derived_class")

    where Derived_class is some class which inherits from a base class, say Base_class. Thanks to this macro, it is possible to serialize classes of type Derived_class through pointers to Base_class correctly.

    The question is this: in C++ I am used to writing abstract factories implemented through a map from std::string to (pointers to) functions which return objects of the desired type (and everything is fine thanks to covariant return types). However, I fail to see how I could use the above non-virtual serialize template member function to properly de-serialize (i.e. construct) an object without knowing its type (but assuming that the type information has been stored by the serializer, say in a string). What I would like to do (keeping the same nomenclature as above) is something like the following:

    XmlArchive xmlArchive; //A type of archive
    xmlArchive.open("C:/ser.txt"); //Contains type information for the serialized class
    Base_class* basePtr = Factory<Base_class>::create("derived_class", xmlArchive);

    with the function on the right-hand side creating an object on the heap of type Derived_class (via the default constructor - this is the part I know how to solve) and calling the serialize function of xmlArchive (here I am stuck!), i.e. doing something like:

    Base_class* Factory<Base_class>::create("derived_class", xmlArchive)
    {
        Base_class* basePtr = new Derived_class; //OK, doable: the usual map from string to pointer to function
        static_cast<Derived_class*>(basePtr)->serialize(xmlArchive); //De-serialization, how?????
        return basePtr;
    }

    I am sure this can be done (boost serialization does it, but its code is impenetrable! :P), yet I fail to figure out how. The key problem is that the serialize function is a template function, so I cannot have a pointer to a generic templated function. As the point of writing the templated serialize function is to make the code generic (i.e. not having to re-write the serialize function for different archivers), it does not make sense to have to register all the derived classes for all possible archive types, like:

    MY_CLASS_REGISTER(Derived_class, XmlArchive);
    MY_CLASS_REGISTER(Derived_class, TxtArchive);
    ...

    In fact, in my code I rely on overloading to get the correct behaviour:

    void serialize(XmlArchive& archive, Derived_class& derived);
    void serialize(TxtArchive& archive, Derived_class& derived);
    ...

    The key point to keep in mind is that the archive type is always known, i.e. I am never using runtime polymorphism for the archive class... (again, I am using overloading on the archive type). Any suggestion to help me out? Thank you very much in advance! Cheers
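
    One possible shape for this, sketched under the question's own constraint that the archive type is known at the call site: template the factory on the archive, and let a single creator function template be instantiated per derived type at registration. Factory, registerType, and creator are invented names, not boost's API:

    #include <map>
    #include <string>

    // Sketch only: Archive is a concrete type known at the call site,
    // so the whole factory is parameterized on it.
    template <class Base, class Archive>
    class Factory {
    public:
        typedef Base* (*Creator)(Archive&);

        static std::map<std::string, Creator>& registry() {
            static std::map<std::string, Creator> r;
            return r;
        }

        // One function template serves every derived type; instantiating
        // it is what "captures" the concrete serialize<Archive> call.
        template <class Derived>
        static Base* creator(Archive& ar) {
            Derived* d = new Derived;   // default-construct...
            d->serialize(ar);           // ...then let the archive fill it in
            return d;
        }

        template <class Derived>
        static void registerType(const std::string& name) {
            registry()[name] = &creator<Derived>;
        }

        static Base* create(const std::string& name, Archive& ar) {
            return registry()[name](ar);   // assumes name was registered
        }
    };

    // Usage sketch:
    //   Factory<Base_class, XmlArchive>::registerType<Derived_class>("derived_class");
    //   Base_class* p = Factory<Base_class, XmlArchive>::create("derived_class", xmlArchive);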


  • Dynamic obfuscation by self-modifying code

    - by Fallout2
    Hi all, here is what I am trying to do: assume you have two functions

    void f1(int *v) { *v = 55; }
    void f2(int *v) { *v = 44; }

    char *template;
    template = allocExecutablePages(...);

    char *allocExecutablePages(int pages)
    {
        template = (char *) valloc(getpagesize() * pages);
        if (mprotect(template, getpagesize(), PROT_READ|PROT_EXEC|PROT_WRITE) == -1) {
            perror("mprotect");
        }
        return template; /* hand the executable region back to the caller */
    }

    I would like to do a comparison between f1 and f2 (that is, tell what is identical and what is not): get the assembly lines of those functions, make a line-by-line comparison, and then put those lines in my template. Is there a way in C to do that? Thanks.

    Update: Thanks for all your answers, guys, but maybe I haven't explained my need correctly. Basically, I'm trying to write a little obfuscation method. The idea consists in letting two or more functions share the same location in memory. A region of memory (which we will call a template) is set up containing some of the machine-code bytes from the functions, more specifically the ones they all have in common. Before a particular function is executed, an edit script is used to patch the template with the necessary machine-code bytes to create a complete version of that function. When another function assigned to the same template is about to be executed, the process repeats, this time with a different edit script. To illustrate this, suppose you want to obfuscate a program that contains two functions f1 and f2. The first one (f1) has the following machine-code bytes:

    Address   Machine code
    0         10
    1         5
    2         6
    3         20

    and the second one (f2) has:

    Address   Machine code
    0         10
    1         9
    2         3
    3         20

    At obfuscation time, one will replace f1 and f2 by the template

    Address   Machine code
    0         10
    1         ?
    2         ?
    3         20

    and by the two edit scripts e1 = {1 becomes 5, 2 becomes 6} and e2 = {1 becomes 9, 2 becomes 3}.

    #include <stdlib.h>
    #include <string.h>

    typedef unsigned int uint32;
    typedef char * addr_t;

    typedef struct {
        uint32 offset;
        char value;
    } EDIT;

    EDIT script1[200], script2[200];
    char *template;
    int template_len, script_len = 0;

    typedef void (*FUN)(int *);
    int val, state = 0;

    void f1_stub()
    {
        if (state != 1) {
            patch(script1, script_len, template);
            state = 1;
        }
        ((FUN)template)(&val);
    }

    void f2_stub()
    {
        if (state != 2) {
            patch(script2, script_len, template);
            state = 2;
        }
        ((FUN)template)(&val);
    }

    int new_main(int argc, char **argv)
    {
        f1_stub();
        f2_stub();
        return 0;
    }

    void f1(int *v) { *v = 99; }
    void f2(int *v) { *v = 42; }

    int main(int argc, char **argv)
    {
        int f1SIZE, f2SIZE;
        /* makeCodeWritable(...); */
        /* template = allocExecutablePages(...); */

        /* Computed at obfuscation time */
        diff((addr_t)f1, f1SIZE, (addr_t)f2, f2SIZE,
             script1, script2, &script_len, template, &template_len);

        /* We hide the proper code */
        memset(f1, 0, f1SIZE);
        memset(f2, 0, f2SIZE);

        return new_main(argc, argv);
    }

    So I now need to write the diff function, which will take the addresses of my two functions and generate the template with the associated scripts. That is why I would like to compare my two functions byte by byte. Sorry for my first post, which was not very understandable! Thank you.
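
    A minimal sketch of what that diff could look like under strong simplifying assumptions - both functions occupy the same number of bytes, and differences are single-byte substitutions only (real machine code rarely cooperates this well). The signature is simplified relative to the call in main above:

    #include <stddef.h>

    typedef unsigned int uint32;
    typedef char * addr_t;

    typedef struct {
        uint32 offset;
        char value;
    } EDIT;

    /* Sketch: compare two equally sized code regions byte by byte.
     * Shared bytes go into the template as-is; differing bytes become
     * one EDIT entry per function plus a placeholder in the template. */
    void diff(addr_t fn1, addr_t fn2, int len,
              EDIT *script1, EDIT *script2, int *script_len,
              char *template_out)
    {
        int i, n = 0;
        for (i = 0; i < len; i++) {
            if (fn1[i] == fn2[i]) {
                template_out[i] = fn1[i];        /* byte common to both */
            } else {
                template_out[i] = 0;             /* placeholder ('?')   */
                script1[n].offset = (uint32)i;   /* patch to rebuild f1 */
                script1[n].value  = fn1[i];
                script2[n].offset = (uint32)i;   /* patch to rebuild f2 */
                script2[n].value  = fn2[i];
                n++;
            }
        }
        *script_len = n;
    }

    /* The matching patch(): apply one edit script to the template. */
    void patch(const EDIT *script, int script_len, char *template_out)
    {
        int i;
        for (i = 0; i < script_len; i++)
            template_out[script[i].offset] = script[i].value;
    }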


  • How do I detect a file write error in C?

    - by rich
    I have an embedded environment where a user might insert or remove a USB flash drive. I would like to know if the drive has been removed, or if there is some other problem, when I try to write to the drive. However, Linux just saves the information in its buffers and returns with no error indicated. The computer I'm using comes with a 2.4.26 kernel and libc 2.3.2. I'm mounting the drive this way:

    i = mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat", MS_SYNCHRONOUS, NULL);

    That works:

    50:/root # mount
    /dev/scsi/host0/bus0/target0/lun0/part1 on /mem type vfat (rw,sync)
    50:/root #

    Later, I try to copy a file to it:

    int ifile, ofile;
    ifile = open("/tmp/tmpmidi.mid", O_RDONLY);
    if (ifile < 0) {
        perror("open in");
        break;
    }
    ofile = open(current_file_name.c_str(), O_WRONLY | O_SYNC);
    if (ofile < 0) {
        perror("open out");
        break;
    }
    #define BUFSZ 256
    char buffer[BUFSZ];
    while (1) {
        i = read(ifile, buffer, BUFSZ);
        if (i < 0) {
            perror("read");
            break;
        }
        j = write(ofile, buffer, i);
        if (j < 0) {
            perror("write");
            break;
        }
        if (i != j) {
            perror("Sizes wrong");  /* note: errno is not meaningful here */
            break;
        }
        if (i < BUFSZ) {
            printf("Copy is finished, I hope\n");
            close(ifile);
            close(ofile);
            break;
        }
    }

    If this snippet of code is executed with a write-protected USB memory, the result is "Copy is finished, I hope", amid a flurry of error messages from the kernel on the console. I believe the same thing would happen if I simply removed the USB drive (without unmounting it). I have also fiddled with devfs. I figured out how to get it to automatically mount the drive (with the REGISTER event), but it never seems to trigger UNREGISTER when I pull out the memory. How can I determine in my program whether I have successfully created a file?

    Update 4 July: It was a silly oversight of me not to check the result from close(). Unfortunately, the file can be closed without error, so that didn't help. What about fsync()? That sounds like a good idea, but that didn't catch the error either. There might be some interesting information in /sys if I had such a thing; I believe that didn't get added until 2.6.?. The comment(s) about the quality of my flash drive are probably justified - it's one of the earlier ones. In fact, write-protect switches seem to be extremely rare these days. I think I have to use the overkill option: create a file, unmount and remount the drive, and check to see if the file is there. If that doesn't solve my problem, then something is really messed up! Note to myself: make sure the file you try to create isn't already there! By the way, this does happen to be a C++ program; you can tell by the .c_str(), which I had intended to edit out for simplicity.
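
    A minimal sketch of the "overkill option" from the update - flush, remount, then check that the file really landed on the medium. verify_write is an invented name, the device/mount paths are the ones shown earlier, and error handling is reduced to return codes (and remember the note above: the file must not already exist before the write):

    #include <fcntl.h>
    #include <sys/mount.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define MEMORY_DEV_PATH "/dev/scsi/host0/bus0/target0/lun0/part1"
    #define MEMORY_MNT_PATH "/mem"

    /* Sketch: returns 0 if 'path' demonstrably survived a remount. */
    int verify_write(const char *path)
    {
        struct stat st;

        sync();                                   /* push buffers toward the device */

        if (umount(MEMORY_MNT_PATH) < 0)          /* flushes, or fails trying       */
            return -1;

        if (mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH,
                  "vfat", MS_SYNCHRONOUS, NULL) < 0)
            return -1;                            /* drive gone or no longer sane   */

        if (stat(path, &st) < 0)                  /* did the file actually make it? */
            return -1;

        return 0;
    }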

