Search Results

Search found 14924 results on 597 pages for 'kernel mode'.

Page 355/597 | < Previous Page | 351 352 353 354 355 356 357 358 359 360 361 362  | Next Page >

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right: the SparseDataSet container is broken, so attempting to save/load with that dataset container type will fail no matter what, and PyML is inconsistent about whether labels should be numbers or strings (it turns out that the oneAgainstRest function is not good enough, because the labels need to be strings and simultaneously convertible to floats, since in some places they are assumed to be strings and elsewhere converted to float). After a great deal of hacking I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error. However, although it no longer gives me an error message, it is still not quite right: the accuracy of the classifier drops significantly when it is saved and then reloaded, so I'm still missing a piece of the puzzle. I am currently using the following custom multi-class classifier for training, saving, and loading:

      class SVM(object):
          def __init__(self,features_or_filename,labels=None,kernel=None):
              if isinstance(features_or_filename,str):
                  filename=features_or_filename;
                  if labels!=None:
                      raise ValueError,"Labels must be None if loading from a file.";
                  with open(os.path.join(filename,"uniquelabels.list"),"rb") as uniquelabelsfile:
                      self.uniquelabels=sorted(list(set(pickle.load(uniquelabelsfile))));
                  self.labeltoindex={};
                  for idx,label in enumerate(self.uniquelabels):
                      self.labeltoindex[label]=idx;
                  self.classifiers=[];
                  for classidx, classname in enumerate(self.uniquelabels):
                      self.classifiers.append(PyML.classifiers.svm.loadSVM(os.path.join(filename,str(classname)+".pyml.svm"),datasetClass = PyML.VectorDataSet));
              else:
                  features=features_or_filename;
                  if labels==None:
                      raise ValueError,"Labels must not be None when training.";
                  self.uniquelabels=sorted(list(set(labels)));
                  self.labeltoindex={};
                  for idx,label in enumerate(self.uniquelabels):
                      self.labeltoindex[label]=idx;
                  points = [[float(xij) for xij in xi] for xi in features];
                  self.classifiers=[PyML.SVM(kernel) for label in self.uniquelabels];
                  for i in xrange(len(self.uniquelabels)):
                      currentlabel=self.uniquelabels[i];
                      currentlabels=['+1' if k==currentlabel else '-1' for k in labels];
                      currentdataset=PyML.VectorDataSet(points,L=currentlabels,positiveClass='+1');
                      self.classifiers[i].train(currentdataset,saveSpace=False);

          def accuracy(self,pts,labels):
              logger=logging.getLogger("ml");
              correct=0;
              total=0;
              classindexes=[self.labeltoindex[label] for label in labels];
              h=self.hypotheses(pts);
              for idx in xrange(len(pts)):
                  if h[idx]==classindexes[idx]:
                      logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" %(self.uniquelabels[ classindexes[idx] ], self.uniquelabels[ h[idx] ]));
                      correct+=1;
                  else:
                      logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" %(self.uniquelabels[ classindexes[idx] ], self.uniquelabels[ h[idx] ]))
                  total+=1;
              return float(correct)/float(total);

          def prediction(self,pt):
              h=self.hypothesis(pt);
              if h!=None:
                  return self.uniquelabels[h];
              return h;

          def predictions(self,pts):
              h=self.hypotheses(self,pts);
              return [self.uniquelabels[x] if x!=None else None for x in h];

          def hypothesis(self,pt):
              bestvalue=None;
              bestclass=None;
              dataset=PyML.VectorDataSet([pt]);
              for classidx, classifier in enumerate(self.classifiers):
                  val=classifier.decisionFunc(dataset,0);
                  if (bestvalue==None) or (val>bestvalue):
                      bestvalue=val;
                      bestclass=classidx;
              return bestclass;

          def hypotheses(self,pts):
              bestvalues=[None for pt in pts];
              bestclasses=[None for pt in pts];
              dataset=PyML.VectorDataSet(pts);
              for classidx, classifier in enumerate(self.classifiers):
                  for ptidx in xrange(len(pts)):
                      val=classifier.decisionFunc(dataset,ptidx);
                      if (bestvalues[ptidx]==None) or (val>bestvalues[ptidx]):
                          bestvalues[ptidx]=val;
                          bestclasses[ptidx]=classidx;
              return bestclasses;

          def save(self,filename):
              if not os.path.exists(filename):
                  os.makedirs(filename);
              with open(os.path.join(filename,"uniquelabels.list"),"wb") as uniquelabelsfile:
                  pickle.dump(self.uniquelabels,uniquelabelsfile,pickle.HIGHEST_PROTOCOL);
              for classidx, classname in enumerate(self.uniquelabels):
                  self.classifiers[classidx].save(os.path.join(filename,str(classname)+".pyml.svm"));

    I am using the latest version of PyML (0.7.2, although PyML.__version__ is 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what that is?

    Read the article

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code (I implemented L-BFGS for multivariate optimization). I followed the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing, close to native code. How can they do that? They say: Q. Is NMath "pure" .NET? A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap. Can someone explain more to me? Thanks!

    Read the article

  • Broadcast-style Bluetooth using Sockets on the iPhone?

    - by Kyle
    Is there any way to open a broadcast Bluetooth socket, listen, and send replies? I want a proper peer-to-peer system where I broadcast and listen for broadcasts in an area, so that a varying set of clients can mingle. Is this possible? My theory is this: if GameKit can sit around wasting 25 seconds of the user's time while having access to a broadcast socket, can't I? Or must I be in kernel mode for such access? I'm also not really sure where the proper Bluetooth headers are. Thanks for reading!

    Read the article

  • Ninject with MembershipProvider | RoleProvider

    - by DVark
    I'm using Ninject as my IoC container and I wrote a role provider as follows:

      public class BasicRoleProvider : RoleProvider
      {
          private IAuthenticationService authenticationService;

          public BasicRoleProvider(IAuthenticationService authenticationService)
          {
              if (authenticationService == null)
                  throw new ArgumentNullException("authenticationService");
              this.authenticationService = authenticationService;
          }

          /* Other methods here */
      }

    I read that provider classes get instantiated before Ninject gets to inject the instance. How do I get around this? I currently have this Ninject binding:

      Bind<RoleProvider>().To<BasicRoleProvider>().InRequestScope();

    From this answer here: "If you mark your dependencies with [Inject] for your properties in your provider class, you can call kernel.Inject(Membership.Provider); this will assign all dependencies to your properties." I do not understand this.

    Read the article

  • OpenMP + SSE gives no speedup

    - by Sayan Ghosh
    Hi, my professor found this interesting experiment on 3D linearly separable kernel convolution using SSE and OpenMP, and gave me the task of benchmarking it on our system. The author claims a crazy 18-fold speedup over the serial approach! That might not always hold, but we were expecting at least a 2-4x speedup running this on a dual-core Intel. http://software.intel.com/en-us/articles/16bit-3d-convolution-sse4openmp-implementation-on-penryn-cpu/#comment-41994 Alas, we could find exactly no speedup. The serial code always performs better, with or without OpenMP. I am using Linux, and observed a certain trend: when no other processes are running on the system, after a while the loadavg starts increasing and the %CPU utilization drops. Another probable false positive I ran into accidentally: I started the program, then immediately paused it. Then I resumed it in the background with bg, and saw a speedup of more than 2. This happens all the time! Any advice would be great. Thanks, Sayan
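    As a sanity check on the benchmarking method itself, here is a minimal OpenMP timing sketch in C (not the Intel sample; the workload and sizes are arbitrary) that times the same loop serially and in parallel with the wall-clock timer omp_get_wtime(). Using clock() instead is a common pitfall, since it sums CPU time across all threads and can hide a real speedup; a memory-bound loop like this one may also show little gain on a dual core regardless of threading.

      /* build (assumed flags): gcc -std=c99 -O2 -fopenmp bench.c -o bench */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void)
      {
          const int n = 1 << 24;                 /* arbitrary workload size */
          float *a = malloc(n * sizeof *a);
          float *b = malloc(n * sizeof *b);
          if (!a || !b) return 1;
          for (int i = 0; i < n; i++) a[i] = (float)i;

          /* serial pass, wall-clock timed */
          double t0 = omp_get_wtime();
          for (int i = 0; i < n; i++) b[i] = a[i] * 0.5f + 1.0f;
          double t_serial = omp_get_wtime() - t0;

          /* parallel pass: the same loop split across threads */
          t0 = omp_get_wtime();
          #pragma omp parallel for
          for (int i = 0; i < n; i++) b[i] = a[i] * 0.5f + 1.0f;
          double t_parallel = omp_get_wtime() - t0;

          printf("serial %.3f s, parallel %.3f s, %d threads\n",
                 t_serial, t_parallel, omp_get_max_threads());
          free(a);
          free(b);
          return 0;
      }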

    Read the article

  • How do I program an AVR Raven with Linux or a Mac?

    - by Andrew McGregor
    This tutorial for programming these starts with programming the Ravens and Jackdaw with a Windows box. Can I do those initial steps with avrdude on a Linux or OS X machine instead? If so, how? Is there any risk of bricking the hardware if I just try? I have a USB JTAG ICE MKii clone, which is supposed to work for this. I'm totally new to AVR, but very experienced with C/C++ programming on Linux or OS X, up to and including kernel programming... so any hint at all would be appreciated, I can read man pages, but only if I know what I'm looking for.

    Read the article

  • Consuming a USB HID device in Windows CE 6.0 using c#

    - by kersny
    I am working on an Embedded Windows CE project and am interested in accessing a USB HID device through one of its USB Host ports. All I really need to read are the raw HID spec packets. On a windows computer, I have a working program using hid.dll, but as far as I have researched, there is no equivalent on CE. I know there is the usbhid.dll, but I'm not sure if it is applicable for this situation. I would prefer not to write a kernel level driver, as I would like to do my coding in c#. Has anyone had experience consuming an HID device on Windows CE?

    Read the article

  • Sniffing LPT Traffic

    - by ArcherT
    I need to intercept LPT output traffic. After a couple of hours of research, I've come to understand that the only way to do this is by writing a kernel-mode driver, more precisely a "filter driver"...? I've downloaded the WDK, but the terminology and the vast number of driver types are a little overwhelming. I'm basically trying to understand what kind of driver I should be writing; my target environment is Windows XP SP2 and SP3 only. Some background info, if it matters: I have a bunch of legacy DOS apps that print to LPT1. I'd like to be able to capture this output and redirect the data (after GDI calls) to a modern USB (network) printer. Fortunately, the latter part of the problem is easy. I'm hoping someone could point me in the right direction. TIA.

    Read the article

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code: reader = pickle.load(open('save.p', 'rb')) which upon first read looked like it would allocate a system file descriptor, read its contents, and then "leak" the open descriptor, because there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor, which led to the real question: what is the lifetime of the file object returned by the example code above? After that statement executes, does the object indeed become unreferenced, and will the fd therefore be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released, thus risking exhaustion of the kernel's per-process descriptor table?

    Read the article

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid Argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding string and stdlib #includes) runs under Fedora 10 but fails with "Invalid Argument" under OS X. Can anyone suggest why this fails in OS X? I have looked at the sendto() call again and again, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.
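    One classic cause of exactly this symptom, offered as a hedged sketch rather than a confirmed diagnosis of this program: BSD-derived stacks (including OS X of that era) expect the ip_len and ip_off fields of a raw IP_HDRINCL packet in host byte order, while Linux expects network byte order, and the mismatch makes sendto() fail with EINVAL. A header-filling helper illustrating the difference (the field values other than the byte order are placeholders):

      #include <string.h>
      #include <netinet/in.h>
      #include <netinet/ip.h>

      void fill_ip_header(struct ip *iph, unsigned short packet_len,
                          struct in_addr src, struct in_addr dst)
      {
          memset(iph, 0, sizeof *iph);
          iph->ip_hl  = 5;                  /* header length in 32-bit words */
          iph->ip_v   = 4;
          iph->ip_ttl = 64;
          iph->ip_p   = IPPROTO_UDP;
          iph->ip_src = src;
          iph->ip_dst = dst;
      #if defined(__APPLE__) || defined(__FreeBSD__)
          iph->ip_len = packet_len;         /* host byte order on BSD/OS X */
          iph->ip_off = 0;
      #else
          iph->ip_len = htons(packet_len);  /* network byte order on Linux */
          iph->ip_off = 0;
      #endif
      }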

    Read the article

  • Oracle Application Server 10.1.3.5 Security issue.

    - by Marius Bogdan IONESCU
    Hello! We are trying to port a J2EE app from OAS 9.0.4 (where it works perfectly) to OAS 10.1.3.5. The reason we are doing this is that we need the app compiled with Java 1.5, and OAS 10.1.3.5 is the only major version supporting those binaries that has the OC4J/Orion kernel. The issue is that the security constraints regarding users/groups/roles are not read by the app server, and instead of it asking for those sets of users, I have to use the oc4jadmin account instead of the selected users for authentication. All the XML files describing these sets of rules have been checked against the OAS book, and they seem to be filled in correctly... does anybody have an idea about this?

    Read the article

  • Optimize grep, awk and sed shell stuff

    - by kockiren
    I am trying to sum the traffic on different ports in the log files from "IPCop", so I wrote a command for my shell, but I think it is possible to optimize the command. First, a line from my log file:

      01/00:03:16 kernel INPUT IN=eth1 OUT= MAC=xxx SRC=xxx DST=xxx LEN=40 TOS=0x00 PREC=0x00 TTL=98 ID=256 PROTO=TCP SPT=47438 DPT=1433 WINDOW=16384 RES=0x00 SYN URGP=0

    Now, with the following command, I sum the LEN values of all lines that contain port 1433:

      grep 1433 log.dat|awk '{for(i=1;i<=10;i++)if($i ~ /LEN/)print $i};'|sed 's/LEN=//g;'|awk '{sum+=$1}END{print sum}'

    I need the for loop because the LEN column is not always in the same position. Any suggestions for optimizing this command? Regards, Rene

    Read the article

  • CUDA threads output different value

    - by kar
    Hi, I wrote a CUDA program; the kernel function is given below. The device memory is allocated through cudaMalloc(), and the value of *md is 10.

      __global__ void add(int *md)
      {
          int x, oper = 2;
          x = threadIdx.x;
          *md = *md * oper;
          if (x == 1) { *md = *md * 0; }
          if (x == 2) { *md = *md * 10; }
          if (x == 3) { *md = *md + 1; }
          if (x == 4) { *md = *md - 1; }
      }

    I executed the above code as add<<<1,5>>>(*md) and add<<<1,4>>>(*md). For <<<1,5>>> the output is 19; for <<<1,4>>> the output is 21. 1) Am I right in thinking that cudaMalloc() allocates in device main memory? 2) Why does only the last thread's update always take effect in the above program? Thank you

    Read the article

  • Doubts in System call mechanism in linux

    - by bala1486
    We transition from ring3 to ring0 using 'int' or the newer 'syscall'/'sysenter' instructions. Does that mean that the page tables and other state that needs to change for the kernel are switched automatically by the 'int' instruction, or does the interrupt handler for 'int 0x80' do the required work and then jump to the respective system call? Also, when returning from a system call we need to go back to user space, and for that we need to know the instruction address in user space at which to continue the user application. Where is that address stored? Does the return instruction automatically change the ring from ring0 back to ring3, or where/how does this ring-changing mechanism take place? Then, I read that changing from ring3 to ring0 is not as costly as changing from ring0 to ring3. Why is this so? Thanks, Bala
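    On the return-address question, a hedged 32-bit x86 illustration: the trap itself saves the user-mode context. When 'int 0x80' is executed from ring 3, the CPU pushes SS, ESP, EFLAGS, CS, and EIP onto the kernel stack, and the kernel's 'iret' pops them back, which is both where the resume address lives and how the privilege level switches back to ring 3; no page-table switch is needed because the kernel is mapped into every process's address space. A user-space sketch that enters the kernel this way (i386 only; assumes gcc and a -m32 build):

      /* write(1, msg, len) via int 0x80 on 32-bit x86 Linux */
      int main(void)
      {
          static const char msg[] = "hello via int 0x80\n";
          long ret;
          __asm__ volatile ("int $0x80"
                            : "=a" (ret)              /* result comes back in eax */
                            : "a" (4),                /* __NR_write on i386 */
                              "b" (1),                /* fd 1 = stdout */
                              "c" (msg),
                              "d" (sizeof msg - 1)
                            : "memory");
          return ret == (long)(sizeof msg - 1) ? 0 : 1;
      }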

    Read the article

  • Why do C compilers prepend underscores to external names?

    - by Michael Burr
    I've been working in C for so long that the fact that compilers typically add an underscore to the start of an extern is just understood... However, another SO question today got me wondering about the real reason why the underscore is added. A wikipedia article claims that a reason is: It was common practice for C compilers to prepend a leading underscore to all external scope program identifiers to avert clashes with contributions from runtime language support I think there's at least a kernel of truth to this, but it also seems not to really answer the question, since if the underscore is added to all externs it won't help much with preventing clashes. Does anyone have good information on the rationale for the leading underscore? Is the added underscore part of the reason that the Unix creat() system call doesn't end with an 'e'? I've heard that early linkers on some platforms had a limit of 6 characters for names. If that's the case, then prepending an underscore to external names would seem to be a downright crazy idea (now I only have 5 characters to play with...).
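    An easy way to see what a given toolchain actually does is to compile a trivial translation unit and dump its symbol table with nm; this is a sketch, and the exact decoration is platform-dependent (Mach-O and older a.out/COFF toolchains prepend the underscore, while typical ELF/Linux toolchains do not).

      /* underscore.c: compile with `cc -c underscore.c`, then run `nm underscore.o`.
       * On underscore-decorating platforms the symbols appear as _answer and
       * _get_answer; on most ELF systems they appear undecorated. */
      int answer = 42;            /* external data symbol */

      int get_answer(void)        /* external function symbol */
      {
          return answer;
      }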

    Read the article

  • When to best implement a I2C driver module in Linux

    - by stefangachter
    I am currently dealing with two devices connected to the I2C bus within an embedded system running Linux. I am using an existing driver for the first device, a camera. For the second device, I have successfully implemented a userspace program with which I can communicate with the second device. So far, both devices seem to coexist happily. However, almost all I2C devices have their own driver module, so I am wondering what the advantages of a driver module are. I had a look at the following thread... http://stackoverflow.com/questions/149032/when-should-i-write-a-linux-kernel-module ... but it did not reach a conclusion. So, what would be the advantage of writing an I2C driver module over a userspace implementation? Regards, Stefan
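    For comparison, here is a hedged sketch of what the userspace route usually looks like on Linux, via the i2c-dev interface (the bus number, the slave address 0x48, and register 0x00 are placeholders for whatever the second device actually uses):

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/i2c-dev.h>

      int main(void)
      {
          unsigned char reg = 0x00, val;                /* register to read (placeholder) */
          int fd = open("/dev/i2c-1", O_RDWR);          /* bus number is a placeholder */
          if (fd < 0) { perror("open"); return 1; }

          if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {         /* select the slave address */
              perror("I2C_SLAVE");
              return 1;
          }

          if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
              perror("i2c transfer");
              return 1;
          }
          printf("reg 0x%02x = 0x%02x\n", reg, val);
          close(fd);
          return 0;
      }

    The usual argument for an in-kernel driver instead is that it can expose the device through a standard subsystem (input, hwmon, and so on), handle interrupts, and arbitrate access when several users touch the same bus; for a single dedicated userspace consumer, the approach above is often enough.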

    Read the article

  • How to register a function in a driver code as its ISR

    - by CVS-2600Hertz-wordpress-com
    Following the feedback I got from http://stackoverflow.com/questions/2683682/new-to-linux-kernel-driver-development/2683819 I have written a driver (.c file) by comparing it with an existing driver and "borrowing" heavily from its code. The driver is registered fine and init() and probe() are working fine. I am also able to access the peripheral device registers. :-) However, I am a bit hazy about the IRQ/ISR. The peripheral device is an input device and raises an interrupt on a GPIO pin. How do I move ahead from my current state (init(), probe(), etc.) to be able to handle the IRQ and execute my ISR function? Many thanks in advance
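    The usual next step for a GPIO-backed input device is to turn the GPIO into an IRQ number and register a handler from probe(); here is a hedged sketch (the GPIO number, names, and trigger edge are placeholders, and the exact helpers can vary with kernel version and platform):

      #include <linux/gpio.h>
      #include <linux/interrupt.h>

      #define MYDEV_GPIO 42                        /* placeholder GPIO number */

      static irqreturn_t mydev_isr(int irq, void *dev_id)
      {
          /* top half: read/acknowledge the device quickly, defer heavy work
           * to a workqueue or tasklet, then report that we handled the IRQ */
          return IRQ_HANDLED;
      }

      static int mydev_setup_irq(void *dev)        /* call this from probe() */
      {
          int irq, ret;

          ret = gpio_request(MYDEV_GPIO, "mydev-int");
          if (ret)
              return ret;
          gpio_direction_input(MYDEV_GPIO);

          irq = gpio_to_irq(MYDEV_GPIO);           /* map the GPIO to an IRQ number */
          if (irq < 0)
              return irq;

          /* dev is handed back to the ISR as dev_id; call free_irq(irq, dev) in remove() */
          ret = request_irq(irq, mydev_isr, IRQF_TRIGGER_RISING, "mydev", dev);
          return ret;
      }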

    Read the article

  • cannot access new drive through nfs

    - by l.thee.a
    I am running nfs-kernel-server to access my files on my Linux machine (Ubuntu, /share). The disk I have been using is full, so I have added a new disk and mounted it at /share/data. My other PC mounts the /share folder at /mnt/nfs, but cannot see the contents of /mnt/nfs/data. I have tried adding /share/data to /etc/exports, but it did not help. What do I do?

    Read the article

  • Ruby Metaprogramming

    - by VP
    I'm trying to write a DSL that allows me to do:

      Policy.name do
        author "Foo"
        reviewed_by "Bar"
      end

    The following code can almost process it:

      class Policy
        include Singleton

        def self.method_missing(name, &block)
          puts name
          puts "#{yield}"
        end

        def self.author(name)
          puts name
        end

        def self.reviewed_by(name)
          puts name
        end
      end

    Defining my methods as class methods (self.method_name), I can access them using the following syntax:

      Policy.name do
        Policy.author "Foo"
        Policy.reviewed_by "Bar"
      end

    If I remove the "self" from the method names and try to use my desired syntax, I receive a "method not found" error in main, so the lookup could not find my function all the way up to the Kernel module. That's OK, I understand the error. But how can I fix it? How can I fix my class to make it work with my desired syntax?

    Read the article

  • detect sender of signal (linux, ptrace)

    - by osgx
    Hello. Can I distinguish between a signal delivered directly to a process and one delivered via a debugger?

    Case 1 ($ ./process1): process1 (not ptraced) sets up a handler and calls alarm(5); ... the signal is handled and I can parse the handler parameters.

    Case 2 ($ debugger1 ./process1): process1 (ptraced by debugger1) sets up a handler and calls alarm(5); ... the signal is caught by debugger1, which resumes process1 with PTRACE_CONT (the signal number is the 4th parameter of PTRACE_CONT); the signal is redelivered to process1 and is handled.

    So, how can I detect in the signal handler whether it was redelivered by the debugger or sent by the system? OS is Linux, kernel 2.6.30. The programs are written in plain C.
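    One way to look at this empirically is to install the handler with sigaction() and SA_SIGINFO and inspect the siginfo_t it receives: si_code distinguishes kernel-generated signals (alarm() normally shows up as SI_KERNEL) from signals sent by a process (SI_USER, with si_pid naming the sender). Whether a PTRACE_CONT redelivery preserves the original siginfo on 2.6.30 is exactly what this sketch would let you check, so treat it as an experiment rather than an answer:

      #include <stdio.h>
      #include <string.h>
      #include <signal.h>
      #include <unistd.h>

      static void handler(int sig, siginfo_t *info, void *ucontext)
      {
          (void)ucontext;
          /* printf is not async-signal-safe; acceptable for a throwaway test */
          printf("sig=%d si_code=%d si_pid=%d (my ppid=%d)\n",
                 sig, info->si_code, (int)info->si_pid, (int)getppid());
      }

      int main(void)
      {
          struct sigaction sa;
          memset(&sa, 0, sizeof sa);
          sa.sa_sigaction = handler;
          sa.sa_flags = SA_SIGINFO;
          sigemptyset(&sa.sa_mask);
          sigaction(SIGALRM, &sa, NULL);

          alarm(5);        /* kernel-generated SIGALRM after 5 seconds */
          pause();
          return 0;
      }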

    Read the article

  • Virtual drivers with Windows Driver Model - where to begin?

    - by waitinforatrain
    I've never written drivers before but I'm starting an open-source project that involves creating virtual MIDI ports that will send the MIDI data over a network. For this, I presume I would be creating some sort of virtual driver using WDM (unless it's possible with kernel hooks?) - but being a beginner to driver development I don't know where to begin. Does anyone know any useful resources that would help me with this project? Or some open-source code from a similar project that I could fork as a starting point?

    Read the article

  • How do I create a downscaled copy of an FBO in OpenGL?

    - by Jasper Bekkers
    Hi, in order to speed up some post-processing shaders I'm using, I need to perform these operations on a framebuffer that is smaller in size than the actual window (about 1/4th or more). Most of the effects I want to optimize are simple blurring operations that could largely be replaced by a smaller kernel and bilinear filtering. Thus, I need to create a copy of the current FBO in another one. However, I couldn't find anything that works on how to do this. I've tried using glBlitFramebufferEXT and rendering a fullscreen quad into the other framebuffer, but both paths result in a black texture as output. How do I go about solving this problem?
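    For the blit path, here is a hedged sketch of the sequence that usually matters (both FBOs are assumed complete with color attachments; the names, sizes, and the GLEW header are placeholders for whatever loader the project uses). Forgetting to bind the two FBOs to the separate READ/DRAW targets, or leaving the read/draw buffers pointing elsewhere, is a common way to end up with a black result:

      #include <GL/glew.h>

      /* copy fbo_full (w x h) into fbo_small at quarter resolution */
      void downscale_fbo(GLuint fbo_full, GLuint fbo_small, int w, int h)
      {
          glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo_full);
          glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fbo_small);
          glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
          glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);

          /* GL_LINEAR gives the bilinear filtering wanted for the blur input */
          glBlitFramebufferEXT(0, 0, w, h,             /* source rectangle      */
                               0, 0, w / 4, h / 4,     /* destination rectangle */
                               GL_COLOR_BUFFER_BIT, GL_LINEAR);

          glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); /* restore the default FBO */
      }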

    Read the article

  • Ninject: Abstract Class

    - by Pickels
    Hello, Do I need to do something different in an abstract class to get dependency injection working with Ninject? I have a base controller with the following code:

      public abstract class BaseController : Controller
      {
          public IAccountRepository AccountRepository { get; set; }
      }

    My module looks like this:

      public class WebDependencyModule : NinjectModule
      {
          public override void Load()
          {
              Bind<IAccountRepository>().To<AccountRepository>();
          }
      }

    And this is my Global.asax:

      protected override void OnApplicationStarted()
      {
          Kernel.Load(new WebDependencyModule());
      }

      protected override IKernel CreateKernel()
      {
          return new StandardKernel();
      }

    It works when I decorate the IAccountRepository property with the [Inject] attribute. Thanks in advance.

    Read the article

  • Pros and Cons of Different macro function / inline methods in C

    - by Robert S. Barnes
    According to the C FAQ, there are basically 3 practical methods for "inlining" code in C:

      #define MACRO(arg1, arg2) do { \
          /* declarations */         \
          stmt1;                     \
          stmt2;                     \
          /* ... */                  \
      } while(0)    /* (no trailing ; ) */

    or

      #define FUNC(arg1, arg2) (expr1, expr2, expr3)

    To clarify this one, the arguments are used in the expressions, and the comma operator returns the value of the last expression. Or there is the inline declaration, which is supported as an extension in GCC and in the C99 standard. The do { ... } while (0) method is widely used in the Linux kernel, but I haven't encountered the other two methods very often, if at all. I'm referring specifically to multi-statement "functions", not single-statement ones like MAX or MIN. What are the pros and cons of each method, and why would you choose one over the other in various situations?
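    To make the trade-offs concrete, here is the same small multi-statement operation written in all three styles (an illustration, not code from the C FAQ): the do/while form allows local declarations and behaves as one statement after an unbraced if; the comma-expression form yields a value but cannot declare a temporary; the inline function is type-checked, evaluates its arguments once, and is the easiest to debug.

      #include <stdio.h>

      /* 1. do { ... } while (0): multiple statements, safe after `if (...)` */
      #define SWAP_AND_COUNT(a, b, counter) do { \
              int tmp_ = (a);                    \
              (a) = (b);                         \
              (b) = tmp_;                        \
              (counter)++;                       \
          } while (0)

      /* 2. comma expression: yields the new counter, needs an external temp */
      #define SWAP_WITH_TMP(a, b, tmp, counter) \
              ((tmp) = (a), (a) = (b), (b) = (tmp), ++(counter))

      /* 3. inline function: type checking, single evaluation, debuggable */
      static inline void swap_and_count(int *a, int *b, int *counter)
      {
          int tmp = *a;
          *a = *b;
          *b = tmp;
          ++*counter;
      }

      int main(void)
      {
          int x = 1, y = 2, tmp, swaps = 0;
          SWAP_AND_COUNT(x, y, swaps);
          SWAP_WITH_TMP(x, y, tmp, swaps);   /* expression value (new count) unused here */
          swap_and_count(&x, &y, &swaps);
          printf("x=%d y=%d swaps=%d\n", x, y, swaps);  /* prints x=2 y=1 swaps=3 */
          return 0;
      }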

    Read the article

< Previous Page | 351 352 353 354 355 356 357 358 359 360 361 362  | Next Page >